Running several instances of ActiveMQ behind a proxy/load balancer

Running several instances of ActiveMQ behind a proxy/load balancer

sergii
Hi,

So, I'm trying to set up failover for my ActiveMQ server at AWS. Actually, it
should not really matter whether it's AWS or not; I think my setup is generic.

What I did was the following:
- ran two ECS tasks with the ActiveMQ Docker container
- put an ELB (load balancer) in front of those two tasks
- added EFS (AWS's variant of NFS) and pointed my ActiveMQ data dir there

The problem is that now my ActiveMQ instances are running in master/slave mode,
meaning that if the load balancer routes a request to the slave (which happens
roughly 50% of the time), the connection is dropped and the client has to
reconnect until it gets routed to the master.

So, how can I run several instances of ActiveMQ behind a proxy/load
balancer?

Thank you,
Sergii



--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html

Re: Running several instances of ActiveMQ behind a proxy/load balancer

Dan Langford
Here is what I am doing, though my setup is a little different. I happen to be
using ActiveMQ Artemis. I am also in master/slave mode and behind a load
balancer. In ActiveMQ Artemis the slave stops listening on the necessary ports
(5671 in my case). My load balancer (which is not ELB) is configured to probe
those ports as health checks, to determine whether a server is "alive" and
should be part of the load-balancing pool. Since the slave stops listening on
those ports, traffic only gets routed to the master. If the master stops
listening on those ports (crash or maintenance), then the slave takes over,
starts listening, and the load balancer automatically recognizes the change
based on the listening ports and routes all traffic to the slave.
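A TCP health check in this scheme boils down to "can I open a socket on the broker port?". As a rough illustration only (this is not ELB's or any balancer's actual implementation; host and port are placeholders):

```python
import socket


def is_broker_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something is accepting TCP connections on host:port.

    This mimics a plain TCP health check: a live master accepts the
    connection, while a slave that has stopped listening refuses it.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A balancer doing this kind of probe would mark the slave unhealthy simply because nothing is bound to the port, which is exactly what makes the scheme work.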

So if you want to keep master/slave mode, you could look into configuring the
ELB health check against the ports that the clients use.
If you would rather have multiple brokers that can truly have traffic load
balanced across them, then I think you need a different clustering strategy.
Instead of master/slave, you are probably looking at a "network of brokers",
in which all the nodes can take traffic and messages can flow among the nodes.
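For ActiveMQ 5.x, a network of brokers is set up with a networkConnector in each broker's activemq.xml. A minimal sketch, assuming a second broker reachable as broker2 on the default OpenWire port (the hostname and the duplex setting are placeholders you would adapt):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
  <networkConnectors>
    <!-- Forward messages to broker2; duplex="true" makes the link bidirectional -->
    <networkConnector uri="static:(tcp://broker2:61616)" duplex="true"/>
  </networkConnectors>
</broker>
```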

Disclaimer: I am fairly new to the ActiveMQ family of brokers, so I am quite
possibly very wrong.

On Thu, Oct 12, 2017 at 9:57 AM sergii <[hidden email]> wrote:


Re: Running several instances of ActiveMQ behind a proxy/load balancer

sergii
Yes, the health-check approach is a nice workaround. The only problem is that
ELB will consider the slave defective and will continuously restart it.




Re: Running several instances of ActiveMQ behind a proxy/load balancer

Tim Bain
I've used ActiveMQ and AWS ELBs, but not the two together, so this response
is based solely on what I know about the two products and on prior posts on
this mailing list.

I don't believe you'll be able to make this work using AWS ELBs. As you
said, ELBs have a pretty simple model of unhealthy instances, and they will
terminate them and replace them when they're determined to be unhealthy.
From an ELB standpoint, it's fronting a homogeneous pool of hosts that can
be used interchangeably, and there are no accommodations available - that
I'm aware of - for having a pool of heterogeneous instances with different
roles (even transient roles like master and slave).

I assume that a part of your goal in having an ELB is to have a constant
way to refer to the broker(s) even in the face of instance termination
(which would result in a new IP address). Route 53 or elastic IPs could
both be used to solve that problem, but another way to do it would be to
have two ELBs that are both backed by an ASG with a target size of 1, and
have clients use failover:(elb1,elb2) when connecting. Then you can
terminate and recreate the hosts behind the two ELBs at will without
modifying the client URI.
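A client connection URI for that layout might look like the following (the ELB hostnames are placeholders; randomize=false makes clients try the brokers in the listed order rather than at random):

```
failover:(tcp://elb1.example.com:61616,tcp://elb2.example.com:61616)?randomize=false
```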

Tim

On Mon, Oct 16, 2017 at 4:12 AM, sergii <[hidden email]> wrote:
