Artemis 2.0 cluster fail over issue

Artemis 2.0 cluster fail over issue

mtod
I'm trying to get an Artemis 2.0 cluster working.

I have 3 nodes running on AWS Windows 2016 with Artemis 2.0.
Java JDK 1.8.0_131

I can't seem to get them to fail over: when I take the live server offline, I see no changes on the backup servers.
I'm not sure what's happening. Does anyone have any ideas?

Thanks

Mike

Live Server excerpts:

16:48:17,209 DEBUG [org.apache.activemq.artemis.core.client.impl.Topology] Topology@bae7dc0[owner=ClusterConnectionImpl@547201549[nodeUUID=c20c017e-35c7-11e7-97fc-0e677709a282, connector=TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-121-51-121, address=jms, server=ActiveMQServerImpl::serverUUID=c20c017e-35c7-11e7-97fc-0e677709a282]] is sending topology to Remote Proxy on channel 6c3f9531
16:48:17,211 DEBUG [org.apache.activemq.artemis.core.client.impl.Topology] Topology@bae7dc0[owner=ClusterConnectionImpl@547201549[nodeUUID=c20c017e-35c7-11e7-97fc-0e677709a282, connector=TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-121-51-121, address=jms, server=ActiveMQServerImpl::serverUUID=c20c017e-35c7-11e7-97fc-0e677709a282]] sending c20c017e-35c7-11e7-97fc-0e677709a282 / Pair[a=TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-121-51-121, b=null] to Remote Proxy on channel 6c3f9531
16:48:17,295 DEBUG [org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl] ClientSessionFactoryImpl received backup update for live/backup pair = TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-121-51-121 / null but it didn't belong to TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-121-51-121
16:48:17,887 INFO  [org.apache.activemq.artemis] AMQ241001: HTTP Server started at http://localhost:8161
16:48:17,899 INFO  [org.apache.activemq.artemis] AMQ241002: Artemis Jolokia REST API available at http://localhost:8161/jolokia
16:48:28,204 DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] ClusterCommunication::Sending notification for addBinding LocalQueueBinding [address=ActiveMQ.Advisory.TempQueue, queue=QueueImpl[name=6addd362-772b-4675-9c97-fca4eb1fdada, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=c20c017e-35c7-11e7-97fc-0e677709a282], temp=true]@dae557e, filter=FilterImpl [sfilterString=__AMQ_CID<>'ID:b3be4c964f10-37643-1494535648494-1:1'], name=6addd362-772b-4675-9c97-fca4eb1fdada, clusterName=6addd362-772b-4675-9c97-fca4eb1fdadac20c017e-35c7-11e7-97fc-0e677709a282] from server ActiveMQServerImpl::serverUUID=c20c017e-35c7-11e7-97fc-0e677709a282
16:48:28,222 DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Couldn't find any bindings for address=activemq.notifications on message=CoreMessage[messageID=6442452924,durable=true,userID=null,priority=0, timestamp=0,expiration=0, durable=true, address=activemq.notifications,properties=TypedProperties[_AMQ_Binding_Type=0,_AMQ_RoutingName=6addd362-772b-4675-9c97-fca4eb1fdada,_AMQ_Distance=0,_AMQ_Address=ActiveMQ.Advisory.TempQueue,_AMQ_NotifType=BINDING_ADDED,_AMQ_Binding_ID=6442452923,_AMQ_FilterString=__AMQ_CID<>'ID:b3be4c964f10-37643-1494535648494-1:1',_AMQ_NotifTimestamp=1494535708220,_AMQ_ClusterName=6addd362-772b-4675-9c97-fca4eb1fdadac20c017e-35c7-11e7-97fc-0e677709a282,foobar=2f6bb043-368b-11e7-a106-0e677709a282]]@2101112334
16:48:28,222 DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Message CoreMessage[messageID=6442452924,durable=true,userID=null,priority=0, timestamp=0,expiration=0, durable=true, address=activemq.notifications,properties=TypedProperties[_AMQ_Binding_Type=0,_AMQ_RoutingName=6addd362-772b-4675-9c97-fca4eb1fdada,_AMQ_Distance=0,_AMQ_Address=ActiveMQ.Advisory.TempQueue,_AMQ_NotifType=BINDING_ADDED,_AMQ_Binding_ID=6442452923,_AMQ_FilterString=__AMQ_CID<>'ID:b3be4c964f10-37643-1494535648494-1:1',_AMQ_NotifTimestamp=1494535708220,_AMQ_ClusterName=6addd362-772b-4675-9c97-fca4eb1fdadac20c017e-35c7-11e7-97fc-0e677709a282,foobar=2f6bb043-368b-11e7-a106-0e677709a282]]@2101112334 is not going anywhere as it didn't have a binding on address:activemq.notifications
16:48:28,293 DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Couldn't find any bindings for address=activemq.notifications on message=CoreMessage[messageID=6442452927,durable=true,userID=null,priority=0, timestamp=0,expiration=0, durable=true, address=activemq.notifications,properties=TypedProperties[_AMQ_RoutingName=6addd362-772b-4675-9c97-fca4eb1fdada,_AMQ_Distance=0,_AMQ_ConsumerCount=1,_AMQ_User=fmiuser,_AMQ_SessionName=ID:b3be4c964f10-37643-1494535648494-1:1:-1,_AMQ_Address=ActiveMQ.Advisory.TempQueue,_AMQ_RemoteAddress=/10.121.43.230:51534,_AMQ_NotifType=CONSUMER_CREATED,_AMQ_NotifTimestamp=1494535708292,_AMQ_ClusterName=6addd362-772b-4675-9c97-fca4eb1fdadac20c017e-35c7-11e7-97fc-0e677709a282]]@1025865428
16:48:28,293 DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Message CoreMessage[messageID=6442452927,durable=true,userID=null,priority=0, timestamp=0,expiration=0, durable=true, address=activemq.notifications,properties=TypedProperties[_AMQ_RoutingName=6addd362-772b-4675-9c97-fca4eb1fdada,_AMQ_Distance=0,_AMQ_ConsumerCount=1,_AMQ_User=fmiuser,_AMQ_SessionName=ID:b3be4c964f10-37643-1494535648494-1:1:-1,_AMQ_Address=ActiveMQ.Advisory.TempQueue,_AMQ_RemoteAddress=/10.121.43.230:51534,_AMQ_NotifType=CONSUMER_CREATED,_AMQ_NotifTimestamp=1494535708292,_AMQ_ClusterName=6addd362-772b-4675-9c97-fca4eb1fdadac20c017e-35c7-11e7-97fc-0e677709a282]]@1025865428 is not going anywhere as it didn't have a binding on address:activemq.notifications

Backup Server 1:

16:20:53,563 INFO  [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server
16:20:53,603 INFO  [org.apache.activemq.artemis.core.server] AMQ221000: backup Message Broker is starting with configuration Broker Configuration (clustered=true,journalDirectory=./data/journal,bindingsDirectory=./data/bindings,largeMessagesDirectory=./data/large-messages,pagingDirectory=./data/paging)
16:20:53,629 INFO  [org.apache.activemq.artemis.core.server] AMQ221055: There were too many old replicated folders upon startup, removing C:\Apache\apache-artemis-2.0.0\bin\brokerAWS1\.\data\journal\oldreplica.10
16:20:53,636 INFO  [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory C:\Apache\apache-artemis-2.0.0\bin\brokerAWS1\.\data\journal to C:\Apache\apache-artemis-2.0.0\bin\brokerAWS1\.\data\journal\oldreplica.12
16:20:53,840 INFO  [org.apache.activemq.artemis.core.server] AMQ221013: Using NIO Journal
16:20:53,885 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 4
16:20:53,887 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 4
16:20:53,894 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
16:20:53,896 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
16:20:53,898 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
16:20:53,899 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.tinyCacheSize: 512
16:20:53,900 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
16:20:53,901 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
16:20:53,906 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
16:20:53,906 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
16:20:53,953 FINE  [io.netty.buffer.AbstractByteBuf] -Dio.netty.buffer.bytebuf.checkAccessible: true
16:20:53,962 FINE  [io.netty.util.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
16:20:53,962 FINE  [io.netty.util.ResourceLeakDetector] -Dio.netty.leakDetection.maxRecords: 4
16:20:53,965 FINE  [io.netty.util.ResourceLeakDetectorFactory] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@30e2e2a8
16:20:54,132 DEBUG [org.apache.activemq.artemis.core.client.impl.Topology] Topology@4d2d5542[owner=ServerLocatorImpl [initialConnectors=[], discoveryGroupConfiguration=DiscoveryGroupConfiguration{name='dg-group1', refreshTimeout=10000, discoveryInitialWaitTimeout=10000}]] is sending topology to QuorumManager(server=null)
16:20:54,165 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-server]. Adding protocol support for: CORE
16:20:54,169 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-amqp-protocol]. Adding protocol support for: AMQP
16:20:54,170 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-hornetq-protocol]. Adding protocol support for: HORNETQ
16:20:54,171 WARN  [org.apache.activemq.artemis.spi.core.protocol.MessagePersister] Cannot find persister for org.apache.activemq.artemis.core.protocol.mqtt.MQTTProtocolManagerFactory@705239f
16:20:54,171 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-mqtt-protocol]. Adding protocol support for: MQTT
16:20:54,173 WARN  [org.apache.activemq.artemis.spi.core.protocol.MessagePersister] Cannot find persister for org.apache.activemq.artemis.core.protocol.openwire.OpenWireProtocolManagerFactory@504ab59c
16:20:54,174 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-openwire-protocol]. Adding protocol support for: OPENWIRE
16:20:54,174 WARN  [org.apache.activemq.artemis.spi.core.protocol.MessagePersister] Cannot find persister for org.apache.activemq.artemis.core.protocol.stomp.StompProtocolManagerFactory@3b825a20
16:20:54,174 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-stomp-protocol]. Adding protocol support for: STOMP
16:20:54,280 DEBUG [org.apache.activemq.artemis.core.client.impl.Topology] Topology@4d2d5542[owner=ServerLocatorImpl [initialConnectors=[], discoveryGroupConfiguration=DiscoveryGroupConfiguration{name='dg-group1', refreshTimeout=10000, discoveryInitialWaitTimeout=10000}]] is sending topology to org.apache.activemq.artemis.core.server.impl.AnyLiveNodeLocatorForReplication@7a2aca8e
16:20:54,692 INFO  [org.apache.activemq.artemis] AMQ241001: HTTP Server started at http://localhost:8161
16:20:54,693 INFO  [org.apache.activemq.artemis] AMQ241002: Artemis Jolokia REST API available at http://localhost:8161/jolokia

Backup Server 2:

16:21:11,762 INFO  [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server
16:21:11,825 INFO  [org.apache.activemq.artemis.core.server] AMQ221000: backup Message Broker is starting with configuration Broker Configuration (clustered=true,journalDirectory=./data/journal,bindingsDirectory=./data/bindings,largeMessagesDirectory=./data/large-messages,pagingDirectory=./data/paging)
16:21:11,888 INFO  [org.apache.activemq.artemis.core.server] AMQ221055: There were too many old replicated folders upon startup, removing C:\Apache\apache-artemis-2.0.0\bin\brokerAWS1\.\data\journal\oldreplica.11
16:21:11,892 INFO  [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory C:\Apache\apache-artemis-2.0.0\bin\brokerAWS1\.\data\journal to C:\Apache\apache-artemis-2.0.0\bin\brokerAWS1\.\data\journal\oldreplica.13
16:21:11,947 INFO  [org.apache.activemq.artemis.core.server] AMQ221013: Using NIO Journal
16:21:11,976 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 4
16:21:11,976 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 4
16:21:11,981 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192
16:21:11,983 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11
16:21:11,985 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216
16:21:11,987 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.tinyCacheSize: 512
16:21:11,988 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256
16:21:12,008 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64
16:21:12,010 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
16:21:12,014 FINE  [io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192
16:21:12,077 FINE  [io.netty.buffer.AbstractByteBuf] -Dio.netty.buffer.bytebuf.checkAccessible: true
16:21:12,084 FINE  [io.netty.util.ResourceLeakDetector] -Dio.netty.leakDetection.level: simple
16:21:12,092 FINE  [io.netty.util.ResourceLeakDetector] -Dio.netty.leakDetection.maxRecords: 4
16:21:12,095 FINE  [io.netty.util.ResourceLeakDetectorFactory] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@5bcba7e2
16:21:12,302 DEBUG [org.apache.activemq.artemis.core.client.impl.Topology] Topology@3bd801d9[owner=ServerLocatorImpl [initialConnectors=[], discoveryGroupConfiguration=DiscoveryGroupConfiguration{name='dg-group1', refreshTimeout=10000, discoveryInitialWaitTimeout=10000}]] is sending topology to QuorumManager(server=null)
16:21:12,341 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-server]. Adding protocol support for: CORE
16:21:12,343 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-amqp-protocol]. Adding protocol support for: AMQP
16:21:12,345 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-hornetq-protocol]. Adding protocol support for: HORNETQ
16:21:12,348 WARN  [org.apache.activemq.artemis.spi.core.protocol.MessagePersister] Cannot find persister for org.apache.activemq.artemis.core.protocol.mqtt.MQTTProtocolManagerFactory@60819872
16:21:12,349 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-mqtt-protocol]. Adding protocol support for: MQTT
16:21:12,350 WARN  [org.apache.activemq.artemis.spi.core.protocol.MessagePersister] Cannot find persister for org.apache.activemq.artemis.core.protocol.openwire.OpenWireProtocolManagerFactory@13959ee
16:21:12,351 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-openwire-protocol]. Adding protocol support for: OPENWIRE
16:21:12,352 WARN  [org.apache.activemq.artemis.spi.core.protocol.MessagePersister] Cannot find persister for org.apache.activemq.artemis.core.protocol.stomp.StompProtocolManagerFactory@767fe033
16:21:12,353 INFO  [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-stomp-protocol]. Adding protocol support for: STOMP
16:21:12,453 DEBUG [org.apache.activemq.artemis.core.client.impl.Topology] Topology@3bd801d9[owner=ServerLocatorImpl [initialConnectors=[], discoveryGroupConfiguration=DiscoveryGroupConfiguration{name='dg-group1', refreshTimeout=10000, discoveryInitialWaitTimeout=10000}]] is sending topology to org.apache.activemq.artemis.core.server.impl.AnyLiveNodeLocatorForReplication@35cdb75c
16:21:12,915 INFO  [org.apache.activemq.artemis] AMQ241001: HTTP Server started at http://localhost:8161
16:21:12,922 INFO  [org.apache.activemq.artemis] AMQ241002: Artemis Jolokia REST API available at http://localhost:8161/jolokia


Re: Artemis 2.0 cluster fail over issue

jbertram
From what I can see in the logs, the nodes aren't finding each other and forming a cluster. What's your configuration?


Justin


Re: Artemis 2.0 cluster fail over issue

mtod
Thanks for the response.


3 systems running on AWS Windows 2016

I just trimmed it down, removing any SSL, until I can get this working.
I followed the https://github.com/apache/activemq-artemis/tree/master/examples/features/ha/replicated-transaction-failover example.

Quick question: how does the cluster know when a master fails? Is there a ping, and if so, what port does it use?

Thanks for looking at this.

Mike

Server1 - Master

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>fmibroker</name>

      <persistence-enabled>true</persistence-enabled>

      <jmx-management-enabled>true</jmx-management-enabled>

      <journal-type>NIO</journal-type>

      <paging-directory>./data/paging</paging-directory>

      <bindings-directory>./data/bindings</bindings-directory>

      <journal-directory>./data/journal</journal-directory>

      <large-messages-directory>./data/large-messages</large-messages-directory>

      <journal-datasync>true</journal-datasync>

      <journal-min-files>2</journal-min-files>

      <journal-pool-files>-1</journal-pool-files>

      <journal-buffer-timeout>1488000</journal-buffer-timeout>

      <connectors>
         <connector name="netty_connector">tcp://10.121.51.121:61616</connector>
      </connectors>

      <disk-scan-period>5000</disk-scan-period>

      <max-disk-usage>90</max-disk-usage>

      <global-max-size>100Mb</global-max-size>

      <acceptors>
         <acceptor name="netty_acceptor">tcp://10.121.51.121:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE</acceptor>
      </acceptors>

      <cluster-user>xxxxx</cluster-user>
      <cluster-password>xxxxx</cluster-password>

      <broadcast-groups>
         <broadcast-group name="bg-group1">
            <group-address>${udp-address:231.7.7.7}</group-address>
            <group-port>9876</group-port>
            <broadcast-period>5000</broadcast-period>
            <connector-ref>netty_connector</connector-ref>
         </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
         <discovery-group name="dg-group1">
            <group-address>${udp-address:231.7.7.7}</group-address>
            <group-port>9876</group-port>
            <refresh-timeout>10000</refresh-timeout>
         </discovery-group>
      </discovery-groups>

      <cluster-connections>
         <cluster-connection name="fmieastcluster">
            <connector-ref>netty_connector</connector-ref>
            <retry-interval>1000</retry-interval>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <discovery-group-ref discovery-group-name="dg-group1"/>
         </cluster-connection>
      </cluster-connections>

      <ha-policy>
         <replication>
            <master>
               <cluster-name>fmieastcluster</cluster-name>
               <group-name>useast</group-name>
               <check-for-live-server>true</check-for-live-server>
            </master>
         </replication>
      </ha-policy>

      <management-address>jms.queue.activemq.management</management-address>

      <security-settings>
         <security-setting match="jms.queue.activemq.management">
            <permission type="manage" roles="admin"/>
         </security-setting>

         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="admin"/>
            <permission type="deleteNonDurableQueue" roles="admin"/>
            <permission type="createDurableQueue" roles="admin"/>
            <permission type="deleteDurableQueue" roles="admin"/>
            <permission type="createAddress" roles="admin"/>
            <permission type="deleteAddress" roles="admin"/>
            <permission type="consume" roles="admin"/>
            <permission type="browse" roles="admin"/>
            <permission type="send" roles="admin"/>
           
            <permission type="manage" roles="admin"/>
         </security-setting>
      </security-settings>

      <address-settings>
         
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
           
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
           
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>

      </addresses>

   </core>
</configuration>




Server2 - Slave

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>fmibroker</name>

      <persistence-enabled>true</persistence-enabled>

      <jmx-management-enabled>true</jmx-management-enabled>

      <journal-type>NIO</journal-type>

      <paging-directory>./data/paging</paging-directory>

      <bindings-directory>./data/bindings</bindings-directory>

      <journal-directory>./data/journal</journal-directory>

      <large-messages-directory>./data/large-messages</large-messages-directory>

      <journal-datasync>true</journal-datasync>

      <journal-min-files>2</journal-min-files>

      <journal-pool-files>-1</journal-pool-files>

      <journal-buffer-timeout>1364000</journal-buffer-timeout>

      <connectors>
         <connector name="netty_connector">tcp://10.121.49.225:61616</connector>
      </connectors>

      <disk-scan-period>5000</disk-scan-period>

      <max-disk-usage>90</max-disk-usage>

      <global-max-size>100Mb</global-max-size>

      <acceptors>
         <acceptor name="netty_acceptor">tcp://10.121.49.225:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE</acceptor>
      </acceptors>

      <cluster-user>xxxxx</cluster-user>
      <cluster-password>xxxxx</cluster-password>

      <broadcast-groups>
         <broadcast-group name="bg-group1">
            <group-address>${udp-address:231.7.7.7}</group-address>
            <group-port>9876</group-port>
            <broadcast-period>5000</broadcast-period>
            <connector-ref>netty_connector</connector-ref>
         </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
         <discovery-group name="dg-group1">
            <group-address>${udp-address:231.7.7.7}</group-address>
            <group-port>9876</group-port>
            <refresh-timeout>10000</refresh-timeout>
         </discovery-group>
      </discovery-groups>

      <cluster-connections>
         <cluster-connection name="fmieastcluster">
            <connector-ref>netty_connector</connector-ref>
            <retry-interval>1000</retry-interval>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <discovery-group-ref discovery-group-name="dg-group1"/>
         </cluster-connection>
      </cluster-connections>

      <ha-policy>
         <replication>
            <slave>
               <cluster-name>fmieastcluster</cluster-name>
               <group-name>useast</group-name>
               <allow-failback>true</allow-failback>
            </slave>
         </replication>
      </ha-policy>

      <security-settings>
         <security-setting match="jms.queue.hornetq.management">
            <permission type="manage" roles="admin"/>
         </security-setting>

         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="admin"/>
            <permission type="deleteNonDurableQueue" roles="admin"/>
            <permission type="createDurableQueue" roles="admin"/>
            <permission type="deleteDurableQueue" roles="admin"/>
            <permission type="createAddress" roles="admin"/>
            <permission type="deleteAddress" roles="admin"/>
            <permission type="consume" roles="admin"/>
            <permission type="browse" roles="admin"/>
            <permission type="send" roles="admin"/>
           
            <permission type="manage" roles="admin"/>
         </security-setting>
      </security-settings>

      <address-settings>
         
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
           
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
           
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>

      </addresses>

   </core>
</configuration>


Server3 - Slave

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>fmibroker</name>

      <persistence-enabled>true</persistence-enabled>

      <jmx-management-enabled>true</jmx-management-enabled>

      <journal-type>NIO</journal-type>

      <paging-directory>./data/paging</paging-directory>

      <bindings-directory>./data/bindings</bindings-directory>

      <journal-directory>./data/journal</journal-directory>

      <large-messages-directory>./data/large-messages</large-messages-directory>

      <journal-datasync>true</journal-datasync>

      <journal-min-files>2</journal-min-files>

      <journal-pool-files>-1</journal-pool-files>

      <journal-buffer-timeout>1488000</journal-buffer-timeout>

      <connectors>
         <connector name="netty_connector">tcp://10.121.44.48:61616</connector>
      </connectors>

      <disk-scan-period>5000</disk-scan-period>

      <max-disk-usage>90</max-disk-usage>

      <global-max-size>100Mb</global-max-size>

      <acceptors>
         <acceptor name="netty_acceptor">tcp://10.121.44.48:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE</acceptor>
      </acceptors>

      <cluster-user>xxxxx</cluster-user>
      <cluster-password>xxxxx</cluster-password>

      <broadcast-groups>
         <broadcast-group name="bg-group1">
            <group-address>${udp-address:231.7.7.7}</group-address>
            <group-port>9876</group-port>
            <broadcast-period>5000</broadcast-period>
            <connector-ref>netty_connector</connector-ref>
         </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
         <discovery-group name="dg-group1">
            <group-address>${udp-address:231.7.7.7}</group-address>
            <group-port>9876</group-port>
            <refresh-timeout>10000</refresh-timeout>
         </discovery-group>
      </discovery-groups>
 
      <cluster-connections>
         <cluster-connection name="fmieastcluster">
            <connector-ref>netty_connector</connector-ref>
            <retry-interval>1000</retry-interval>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <discovery-group-ref discovery-group-name="dg-group1"/>
         </cluster-connection>
      </cluster-connections>
         
      <ha-policy>
         <replication>
            <slave>
               <cluster-name>fmieastcluster</cluster-name>
               <group-name>useast</group-name>
               <allow-failback>true</allow-failback>
            </slave>
         </replication>
      </ha-policy>

      <management-address>jms.queue.activemq.management</management-address>

      <security-settings>
         <security-setting match="jms.queue.activemq.management">
            <permission type="manage" roles="admin"/>
         </security-setting>

         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="admin"/>
            <permission type="deleteNonDurableQueue" roles="admin"/>
            <permission type="createDurableQueue" roles="admin"/>
            <permission type="deleteDurableQueue" roles="admin"/>
            <permission type="createAddress" roles="admin"/>
            <permission type="deleteAddress" roles="admin"/>
            <permission type="consume" roles="admin"/>
            <permission type="browse" roles="admin"/>
            <permission type="send" roles="admin"/>
           
            <permission type="manage" roles="admin"/>
         </security-setting>
      </security-settings>

      <address-settings>
         
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
           
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
           
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>

      </addresses>

   </core>
</configuration>

Re: Artemis 2.0 cluster fail over issue

jbertram
> Quick question: how does the cluster know when a master fails? Is there a ping, and if so, what port does it use?

Once the live and backup brokers discover each other, they connect to each other via TCP. When that TCP connection breaks, the backup essentially knows the live is dead.
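
For reference, how quickly a broken connection is noticed is typically governed by the cluster connection's check-period and connection-ttl settings. A minimal sketch only (the values shown are the documented defaults, not taken from your broker.xml):

      <cluster-connection name="fmieastcluster">
         <connector-ref>netty_connector</connector-ref>
         <!-- how often this broker pings the connection, in ms -->
         <check-period>30000</check-period>
         <!-- how long the connection may go without a ping before it is considered dead, in ms -->
         <connection-ttl>60000</connection-ttl>
         <retry-interval>1000</retry-interval>
         <discovery-group-ref discovery-group-name="dg-group1"/>
      </cluster-connection>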


Your <discovery-group> and <broadcast-group> are configured to use UDP multicast. Is that supported in your cloud environment?
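
For reference: if UDP multicast is not available (it generally isn't inside an EC2 VPC), the same cluster can be wired with a static connector list instead of the broadcast/discovery groups. This is only a sketch; it reuses the addresses from the configs above, and the extra connector names are made up for illustration:

      <connectors>
         <connector name="netty_connector">tcp://10.121.51.121:61616</connector>
         <connector name="node2_connector">tcp://10.121.49.225:61616</connector>
         <connector name="node3_connector">tcp://10.121.44.48:61616</connector>
      </connectors>

      <cluster-connections>
         <cluster-connection name="fmieastcluster">
            <connector-ref>netty_connector</connector-ref>
            <retry-interval>1000</retry-interval>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <!-- explicit list of the other nodes replaces <discovery-group-ref> -->
            <static-connectors>
               <connector-ref>node2_connector</connector-ref>
               <connector-ref>node3_connector</connector-ref>
            </static-connectors>
         </cluster-connection>
      </cluster-connections>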


Justin

----- Original Message -----
From: "mtod" <[hidden email]>
To: [hidden email]
Sent: Friday, May 12, 2017 10:47:55 AM
Subject: Re: Artemis 2.0 cluster fail over issue

Thanks for the response.


3 systems running on AWS Windows 2016

I just trimed it down removing any SSL until I can get this working.
I followed the
https://github.com/apache/activemq-artemis/tree/master/examples/features/ha/replicated-transaction-failover
example.

Quick question how does the cluster know when a master fails is the a ping
if so what port does that use?

Thanks for looking at this.

Mike

Server1 - Master

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq
/schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>fmibroker</name>

      <persistence-enabled>true</persistence-enabled>
         
         
          <jmx-management-enabled>true</jmx-management-enabled>

     
      <journal-type>NIO</journal-type>

      <paging-directory>./data/paging</paging-directory>

      <bindings-directory>./data/bindings</bindings-directory>

      <journal-directory>./data/journal</journal-directory>

     
<large-messages-directory>./data/large-messages</large-messages-directory>

      <journal-datasync>true</journal-datasync>

      <journal-min-files>2</journal-min-files>

      <journal-pool-files>-1</journal-pool-files>

     

     

     
     

     
     

     
     

     
     



     
      <journal-buffer-timeout>1488000</journal-buffer-timeout>

    <connectors>
       
        <connector
name="netty_connector">tcp://10.121.51.121:61616</connector>
    </connectors>

     
      <disk-scan-period>5000</disk-scan-period>

     
      <max-disk-usage>90</max-disk-usage>

     
      <global-max-size>100Mb</global-max-size>

      <acceptors>
         
         <acceptor
name="netty_acceptor">tcp://10.121.51.121:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE</acceptor>
      </acceptors>

      <cluster-user>xxxxx</cluster-user>
      <cluster-password>xxxxx</cluster-password>

      <broadcast-groups>
         <broadcast-group name="bg-group1">
            <group-address>${udp-address:231.7.7.7}</group-address>
            <group-port>9876</group-port>
            <broadcast-period>5000</broadcast-period>
            <connector-ref>netty_connector</connector-ref>
         </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
         <discovery-group name="dg-group1">
            <group-address>${udp-address:231.7.7.7}</group-address>
            <group-port>9876</group-port>
            <refresh-timeout>10000</refresh-timeout>
         </discovery-group>
      </discovery-groups>

      <cluster-connections>
         <cluster-connection name="fmieastcluster">
                       
            <connector-ref>netty_connector</connector-ref>
                        <retry-interval>1000</retry-interval>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <discovery-group-ref discovery-group-name="dg-group1"/>
         </cluster-connection>
      </cluster-connections>

      <ha-policy>
         <replication>
            <master>
                                <cluster-name>fmieastcluster</cluster-name>
                                <group-name>useast</group-name>
                                <check-for-live-server>true</check-for-live-server>
                        </master>
         </replication>
      </ha-policy>

          <management-address>jms.queue.activemq.management</management-address>
         
      <security-settings>
               
               
               
                <security-setting match="jms.queue.activemq.management">
                        <permission type="manage" roles="admin" />
                </security-setting>
         
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="admin"/>
            <permission type="deleteNonDurableQueue" roles="admin"/>
            <permission type="createDurableQueue" roles="admin"/>
            <permission type="deleteDurableQueue" roles="admin"/>
            <permission type="createAddress" roles="admin"/>
            <permission type="deleteAddress" roles="admin"/>
            <permission type="consume" roles="admin"/>
            <permission type="browse" roles="admin"/>
            <permission type="send" roles="admin"/>
           
            <permission type="manage" roles="admin"/>
         </security-setting>
      </security-settings>

      <address-settings>
         
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
           
            <max-size-bytes>-1</max-size-bytes>
           
<message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
           
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>

      </addresses>

   </core>
</configuration>




Server2 - Slave

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>fmibroker</name>

      <persistence-enabled>true</persistence-enabled>

      <jmx-management-enabled>true</jmx-management-enabled>

      <journal-type>NIO</journal-type>

      <paging-directory>./data/paging</paging-directory>

      <bindings-directory>./data/bindings</bindings-directory>

      <journal-directory>./data/journal</journal-directory>

      <large-messages-directory>./data/large-messages</large-messages-directory>

      <journal-datasync>true</journal-datasync>

      <journal-min-files>2</journal-min-files>

      <journal-pool-files>-1</journal-pool-files>

      <journal-buffer-timeout>1364000</journal-buffer-timeout>

      <connectors>
         <connector name="netty_connector">tcp://10.121.49.225:61616</connector>
      </connectors>

     
      <disk-scan-period>5000</disk-scan-period>

     
      <max-disk-usage>90</max-disk-usage>

     
      <global-max-size>100Mb</global-max-size>

      <acceptors>
         <acceptor name="netty_acceptor">tcp://10.121.49.225:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE</acceptor>
      </acceptors>

      <cluster-user>xxxxx</cluster-user>
      <cluster-password>xxxxx</cluster-password>

      <broadcast-groups>
         <broadcast-group name="bg-group1">
            <group-address>${udp-address:231.7.7.7}</group-address>
            <group-port>9876</group-port>
            <broadcast-period>5000</broadcast-period>
            <connector-ref>netty_connector</connector-ref>
         </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
         <discovery-group name="dg-group1">
            <group-address>${udp-address:231.7.7.7}</group-address>
            <group-port>9876</group-port>
            <refresh-timeout>10000</refresh-timeout>
         </discovery-group>
      </discovery-groups>

      <cluster-connections>
         <cluster-connection name="fmieastcluster">
            <connector-ref>netty_connector</connector-ref>
            <retry-interval>1000</retry-interval>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <discovery-group-ref discovery-group-name="dg-group1"/>
         </cluster-connection>
      </cluster-connections>

      <ha-policy>
         <replication>
            <slave>
               <cluster-name>fmieastcluster</cluster-name>
               <group-name>useast</group-name>
               <allow-failback>true</allow-failback>
            </slave>
         </replication>
      </ha-policy>

      <security-settings>
         <security-setting match="jms.queue.hornetq.management">
            <permission type="manage" roles="admin" />
         </security-setting>

         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="admin"/>
            <permission type="deleteNonDurableQueue" roles="admin"/>
            <permission type="createDurableQueue" roles="admin"/>
            <permission type="deleteDurableQueue" roles="admin"/>
            <permission type="createAddress" roles="admin"/>
            <permission type="deleteAddress" roles="admin"/>
            <permission type="consume" roles="admin"/>
            <permission type="browse" roles="admin"/>
            <permission type="send" roles="admin"/>
           
            <permission type="manage" roles="admin"/>
         </security-setting>
      </security-settings>

      <address-settings>
         
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
           
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
           
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>

      </addresses>

   </core>
</configuration>


Server3 - Slave

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>fmibroker</name>

      <persistence-enabled>true</persistence-enabled>

      <jmx-management-enabled>true</jmx-management-enabled>

      <journal-type>NIO</journal-type>

      <paging-directory>./data/paging</paging-directory>

      <bindings-directory>./data/bindings</bindings-directory>

      <journal-directory>./data/journal</journal-directory>

      <large-messages-directory>./data/large-messages</large-messages-directory>

      <journal-datasync>true</journal-datasync>

      <journal-min-files>2</journal-min-files>

      <journal-pool-files>-1</journal-pool-files>

      <journal-buffer-timeout>1488000</journal-buffer-timeout>

      <connectors>
         <connector name="netty_connector">tcp://10.121.44.48:61616</connector>
      </connectors>

     
      <disk-scan-period>5000</disk-scan-period>

     
      <max-disk-usage>90</max-disk-usage>

     
      <global-max-size>100Mb</global-max-size>

      <acceptors>
         <acceptor name="netty_acceptor">tcp://10.121.44.48:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE</acceptor>
      </acceptors>

      <cluster-user>xxxxx</cluster-user>
      <cluster-password>xxxxx</cluster-password>

      <broadcast-groups>
         <broadcast-group name="bg-group1">
            <group-address>${udp-address:231.7.7.7}</group-address>
            <group-port>9876</group-port>
            <broadcast-period>5000</broadcast-period>
            <connector-ref>netty_connector</connector-ref>
         </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
         <discovery-group name="dg-group1">
            <group-address>${udp-address:231.7.7.7}</group-address>
            <group-port>9876</group-port>
            <refresh-timeout>10000</refresh-timeout>
         </discovery-group>
      </discovery-groups>
 
      <cluster-connections>
         <cluster-connection name="fmieastcluster">
            <connector-ref>netty_connector</connector-ref>
            <retry-interval>1000</retry-interval>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <discovery-group-ref discovery-group-name="dg-group1"/>
         </cluster-connection>
      </cluster-connections>
         
      <ha-policy>
         <replication>
            <slave>
               <cluster-name>fmieastcluster</cluster-name>
               <group-name>useast</group-name>
               <allow-failback>true</allow-failback>
            </slave>
         </replication>
      </ha-policy>

      <management-address>jms.queue.activemq.management</management-address>

      <security-settings>
         <security-setting match="jms.queue.activemq.management">
            <permission type="manage" roles="admin" />
         </security-setting>
         
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="admin"/>
            <permission type="deleteNonDurableQueue" roles="admin"/>
            <permission type="createDurableQueue" roles="admin"/>
            <permission type="deleteDurableQueue" roles="admin"/>
            <permission type="createAddress" roles="admin"/>
            <permission type="deleteAddress" roles="admin"/>
            <permission type="consume" roles="admin"/>
            <permission type="browse" roles="admin"/>
            <permission type="send" roles="admin"/>
           
            <permission type="manage" roles="admin"/>
         </security-setting>
      </security-settings>

      <address-settings>
         
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
           
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
           
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>

      </addresses>

   </core>
</configuration>





Re: Artemis 2.0 cluster fail over issue

mtod

The ${udp-address:...} form of the UDP multicast address was just added to align the config with the example; I tried it both ways.
Note: the log in the original post was produced without that option, i.e. with:

      <discovery-groups>
         <discovery-group name="dg-group1">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
            <refresh-timeout>10000</refresh-timeout>
         </discovery-group>
      </discovery-groups>

Re: Artemis 2.0 cluster fail over issue

jbertram
So...what were you using for discovery and broadcast when you weren't using UDP multicast?


Justin


Re: Artemis 2.0 cluster fail over issue

mtod
I just did not have the ${udp-address:} wrapper, only the plain IP 231.7.7.7 that the artemis create command generates by default.

I just happened to notice that's how they had it in the example, so I gave it a try. :)


Thanks

Mike
 

Re: Artemis 2.0 cluster fail over issue

mtod
Looking at the slaves, I see an entry in the log for topology; I'm not sure what initialConnectors=[] means.

12:57:26,528 DEBUG [org.apache.activemq.artemis.core.client.impl.Topology] Topology@4d2d5542[owner=ServerLocatorImpl [initialConnectors=[], discoveryGroupConfiguration=DiscoveryGroupConfiguration{name='dg-group1', refreshTimeout=10000, discoveryInitialWaitTimeout=10000}]] is sending topology to QuorumManager(server=null)

Re: Artemis 2.0 cluster fail over issue

jbertram
Using ${udp-address:231.7.7.7} is functionally equivalent to 231.7.7.7 since both are just details of the overall UDP multicast configuration. You still haven't answered my question about whether or not UDP multicast is supported in your cloud environment. Can you speak to this point?
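
In case it helps to see it spelled out: the ${propertyName:default} form just resolves a system property when the broker loads its configuration and falls back to the default when nothing is set, so with no property defined it behaves exactly like the hard-coded address. A minimal sketch (the property name udp-address is simply the one the shipped example uses):

      <broadcast-group name="bg-group1">
         <!-- resolves the system property "udp-address" if it is set,
              otherwise falls back to the literal 231.7.7.7 -->
         <group-address>${udp-address:231.7.7.7}</group-address>
         <group-port>9876</group-port>
         <connector-ref>netty_connector</connector-ref>
      </broadcast-group>

If you ever wanted a different multicast group you could set the property at start-up (for example, -Dudp-address=239.1.2.3 added to JAVA_ARGS in the instance's etc/artemis.profile, or artemis.profile.cmd on Windows); left unset, both spellings broadcast to 231.7.7.7.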


Justin


Re: Artemis 2.0 cluster fail over issue

mtod
Hmm, I had to verify the configuration; it looks like UDP is allowed:

I'm not an expert on IP, but it looks like 231.7.7.7 would not be able to send out to the server.

<http://activemq.2283324.n4.nabble.com/file/n4726081/outbound.png>

<http://activemq.2283324.n4.nabble.com/file/n4726081/inbound.png>

Re: Artemis 2.0 cluster fail over issue

jbertram
I'm confused. You say, "looks like UDP is allowed," and then you say, "it looks like the 231.7.7.7 would not be able to send." Then you attached some screen shots of what looks like some kind of administration console that I don't recognize. So is UDP multicast supported in your cloud environment or not? A simple yes or no will suffice.

At this point I'm not sure I can provide the kind of help you need so I'll just say a few things in the hope that they'll be productive:

  1) Most cloud environments don't support UDP multicast, so configuring your cluster to use UDP multicast in a cloud environment is likely to result in a broken cluster.
  2) If your cloud environment doesn't support UDP multicast then you can either use static clustering or JGroups. You can see examples of these configurations in the "clustered-static-discovery" and "clustered-jgroups" examples respectively.
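
For illustration, here is roughly what the static variant looks like in broker.xml, reusing the names and addresses from the configs posted above (the extra connector names are just placeholders; the clustered-static-discovery example is the authoritative reference):

      <connectors>
         <!-- this broker's own connector -->
         <connector name="netty_connector">tcp://10.121.51.121:61616</connector>
         <!-- connectors pointing at the other two brokers -->
         <connector name="node2_connector">tcp://10.121.49.225:61616</connector>
         <connector name="node3_connector">tcp://10.121.44.48:61616</connector>
      </connectors>

      <cluster-connections>
         <cluster-connection name="fmieastcluster">
            <connector-ref>netty_connector</connector-ref>
            <retry-interval>1000</retry-interval>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <!-- a static list of peers replaces the discovery-group-ref -->
            <static-connectors>
               <connector-ref>node2_connector</connector-ref>
               <connector-ref>node3_connector</connector-ref>
            </static-connectors>
         </cluster-connection>
      </cluster-connections>

Each broker would list the other nodes in its static-connectors, and the broadcast-groups/discovery-groups sections can then be dropped entirely.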


Justin


Re: Artemis 2.0 cluster fail over issue

mtod
To answer your question: yes, it does support UDP. But to add to that, no, it does not support the multicast IP address.

The images show the inbound and outbound rules, broken down by protocol and IP range, for the AWS instance. Outbound has no limits (all protocols, all IPs), but inbound, while allowing all protocols, restricts the IP range, so it does not cover the address 231.7.7.7 even though it allows UDP.

No need to respond, and thanks for your help; I'll figure it out.

Mike




Re: Artemis 2.0 cluster fail over issue

jbertram
> To answer your question yes it does support UDP.

I was asking about *UDP multicast*. That's not the same as plain UDP.


> no it does not support the multicast ip address.

Then it does not support UDP multicast since a multicast address (e.g. 231.7.7.7) must be used.


Therefore your cluster will not form properly given your current environment and configuration.


Justin


Re: Artemis 2.0 cluster fail over issue

mtod
Just to close this out: I set up static connections and it's failing over fine.

I would have preferred to use discovery, but it seems that's not possible at this time on AWS.

Thanks for your help

Mike

Re: Artemis 2.0 cluster fail over issue

jbertram
I believe discovery is still possible using JGroups (e.g. using S3_PING [1] or AWS_PING [2]).


Justin

[1] http://jgroups.org/manual/index.html#_s3_ping
[2] http://jgroups.org/manual/index.html#_aws_ping
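
For the archives, the JGroups route keeps dynamic discovery without UDP multicast: the broadcast and discovery groups point at a JGroups stack file instead of a group address, along the lines of the clustered-jgroups example (the file name jgroups-s3.xml and the channel name here are placeholders):

      <broadcast-groups>
         <broadcast-group name="bg-group1">
            <jgroups-file>jgroups-s3.xml</jgroups-file>
            <jgroups-channel>activemq_broadcast_channel</jgroups-channel>
            <connector-ref>netty_connector</connector-ref>
         </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
         <discovery-group name="dg-group1">
            <jgroups-file>jgroups-s3.xml</jgroups-file>
            <jgroups-channel>activemq_broadcast_channel</jgroups-channel>
            <refresh-timeout>10000</refresh-timeout>
         </discovery-group>
      </discovery-groups>

The jgroups-s3.xml file itself would be an ordinary TCP-based JGroups stack whose discovery protocol is S3_PING (or AWS_PING), configured per the manual pages linked above and placed where the broker can find it on its classpath (typically the instance's etc directory).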
