Artemis master/slave startup results in two live brokers

Artemis master/slave startup results in two live brokers

martk
Hi everyone,

I am using a master/slave setup with replication and Artemis 2.1.0.

If I kill the master, the slave becomes live (as configured). But after starting the master again, it directly becomes live as well and there is no replication anymore; the slave also continues as live, so I end up with two live brokers.

Re: Artemis master/slave startup results in two live brokers

Justin Bertram
Your XML didn't come through for some reason.  I can't see what your
configuration is.


Justin


Re: Artemis master/slave startup results in two live brokers

martk
Extract of the master.xml (setting "check-for-live-server" to false gives the same result):

      <ha-policy>
         <replication>
            <master>
               <check-for-live-server>true</check-for-live-server>
            </master>
         </replication>
      </ha-policy>

      <connectors>
         <connector name="netty-connector">tcp://192.168.0.10:61600</connector>
         <connector name="netty-backup-connector-slave">tcp://192.168.0.11:61600</connector>
      </connectors>

      <acceptors>
         <acceptor name="netty-acceptor">tcp://0.0.0.0:61600</acceptor>
      </acceptors>

      <cluster-connections>
         <cluster-connection name="cluster">
            <connector-ref>netty-connector</connector-ref>
            <retry-interval>500</retry-interval>
            <use-duplicate-detection>true</use-duplicate-detection>
            <static-connectors allow-direct-connections-only="true">
               <connector-ref>netty-backup-connector-slave</connector-ref>
            </static-connectors>
         </cluster-connection>
      </cluster-connections>


Extract of the slave.xml:

      <ha-policy>
         <replication>
            <slave>
               <allow-failback>false</allow-failback>
            </slave>
         </replication>
      </ha-policy>

      <connectors>
         <connector name="netty-live-connector">tcp://192.168.0.10:61600</connector>
         <connector name="netty-connector">tcp://192.168.0.11:61600</connector>
      </connectors>

      <acceptors>
         <acceptor name="netty-acceptor">tcp://0.0.0.0:61600</acceptor>
      </acceptors>

      <cluster-connections>
         <cluster-connection name="cluster">
            <connector-ref>netty-connector</connector-ref>
            <retry-interval>500</retry-interval>
            <use-duplicate-detection>true</use-duplicate-detection>
            <static-connectors allow-direct-connections-only="true">
               <connector-ref>netty-live-connector</connector-ref>
            </static-connectors>
         </cluster-connection>
      </cluster-connections>

Re: Artemis master/slave startup results in two live brokers

Justin Bertram
Artemis ships with an example that mimics your use-case called
'replicated-failback-static'.  Does this example work for you?  You can run
it from the <ARTEMIS_HOME>/examples/features/ha/replicated-failback-static
directory using 'mvn verify'.
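
For comparison, the failback-related part of that example's broker configuration has roughly the following shape (a sketch, not copied verbatim from the distribution; note that the backup there allows failback, unlike the allow-failback=false setup above):

      <!-- live broker (sketch of the example's shape) -->
      <ha-policy>
         <replication>
            <master>
               <check-for-live-server>true</check-for-live-server>
            </master>
         </replication>
      </ha-policy>

      <!-- backup broker (sketch; allow-failback defaults to true) -->
      <ha-policy>
         <replication>
            <slave>
               <allow-failback>true</allow-failback>
            </slave>
         </replication>
      </ha-policy>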


Justin


Re: Artemis master/slave startup results in two live brokers

martk
The example works, but my configuration also works when both brokers run on one host. Please try it on two different hosts, Justin. I am not sure, but maybe Clebert already knows where to look in Artemis.
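
The configuration difference between the two setups is roughly the following (a sketch; the loopback addresses and the second port for the single-host test are assumptions, while the two-host values are the ones from the configuration earlier in the thread):

      <!-- single-host test: both brokers on the loopback address, distinguished by port -->
      <connector name="netty-connector">tcp://127.0.0.1:61600</connector>
      <connector name="netty-backup-connector-slave">tcp://127.0.0.1:61601</connector>

      <!-- two-host setup: the same port on two different addresses -->
      <connector name="netty-connector">tcp://192.168.0.10:61600</connector>
      <connector name="netty-backup-connector-slave">tcp://192.168.0.11:61600</connector>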

Re: Artemis master/slave startup results in two live brokers

clebertsuconic
There's a recent fix on the Artemis master branch, due as 2.3.0 next week. Can you try that one?

Re: Artemis master/slave startup results in two live brokers

martk
Thanks, Clebert. I have tested it with the current master (commit f138bc5284c15a3ddc459246d730a2ff316b3e88) and my scenario is now working (check-for-live-server=true and allow-failback=false).

The version in the Artemis master pom.xml is 2.2.0-SNAPSHOT, so this fix would be in the 2.2.0 release?

Re: Artemis master/slave startup results in two live brokers

clebertsuconic
Yip.  Working on it now.

--
Clebert Suconic

Re: Artemis master/slave startup results in two live brokers

boyangfan
Hi all,

I am running the exact same static configuration with version 2.6.4, but in a Kubernetes cluster, and I see that I still end up with two live servers after the original live broker is brought down. Any insight on this?
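
Since the static configuration is reused as-is, the connectors in the Kubernetes setup have the same shape, just with the cluster's addresses; a sketch, with hypothetical service DNS names standing in for the fixed IPs used earlier in the thread:

      <connectors>
         <!-- hypothetical stable service names; in practice these need to resolve to the two broker pods -->
         <connector name="netty-connector">tcp://artemis-master.messaging.svc.cluster.local:61600</connector>
         <connector name="netty-backup-connector-slave">tcp://artemis-slave.messaging.svc.cluster.local:61600</connector>
      </connectors>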


Thanks


