How to limit queue depth by #msgs

Petter Nordlander
Hi,

Is there a way to limit the queue depth of an ActiveMQ queue in terms of number of messages?

I know there are "per-destination policies" that can limit queue usage in terms of memory used. However, the number of messages can indicate other things, such as how many KahaDB .log files can be tied up by a queue whose consumer is infrequent (or just unstable).
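For reference, the kind of per-destination memory policy I mean looks
roughly like this in activemq.xml; the ">" wildcard and the 10mb figure
are only illustrative:

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <!-- Illustrative: cap every queue at ~10 MB of broker memory;
               producers are flow controlled once the limit is hit -->
          <policyEntry queue=">" memoryLimit="10mb" producerFlowControl="true"/>
        </policyEntries>
      </policyMap>
    </destinationPolicy>

As far as I can tell there is no equivalent attribute for a maximum number
of messages, which is what I am after.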

BR Petter

Re: How to limit queue depth by #msgs

Tim Bain
Petter,

I'm not aware of a way to limit queue depth by number of messages that will
invoke producer flow control, which is the behavior I assume you want to
result when you hit the limit.  We actually just disabled per-destination
memory limits in our broker because of the difficulty of guaranteeing that
we'd always be able to fit a full prefetch buffer worth of messages into N
MB of space; without that, we risked flow controlling producers when
consumers were slow (or entirely unresponsive) before the broker built up
enough messages to consider the consumer slow and abort it via the
AbortSlowConsumerStrategy.  So there's probably an enhancement request that
should get submitted to allow per-destination limits to be set in terms of
number of messages.  (I just searched for an existing enhancement request
to cover this and didn't find one.)
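For anyone following along, the slow consumer abort piece is itself a
per-destination policy; a minimal sketch with purely illustrative values
(the same element works on queue policies too):

    <policyEntry topic=">">
      <slowConsumerStrategy>
        <!-- Illustrative: check every 30s and abort any subscriber that
             has stayed slow for more than 30s, freeing whatever has
             backed up for it -->
        <abortSlowConsumerStrategy checkPeriod="30000"
                                   maxSlowDuration="30000"
                                   abortConnection="false"/>
      </slowConsumerStrategy>
    </policyEntry>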

If you're looking to discard messages when you hit the limit rather than
flow control producers, you can use one of the *PendingMessageLimitStrategy
implementations.  But I'd guess this probably isn't what you're looking for.
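For completeness, a minimal sketch of the constant variant; note that the
pending-message limit acts on messages pending for (topic) subscribers,
and the 1000 figure is just an example:

    <policyEntry topic=">">
      <pendingMessageLimitStrategy>
        <!-- Illustrative: keep at most 1000 pending messages per slow
             subscriber; anything beyond that is discarded, and producers
             are never flow controlled -->
        <constantPendingMessageLimitStrategy limit="1000"/>
      </pendingMessageLimitStrategy>
    </policyEntry>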

Tim


Re: How to limit queue depth by #msgs

Petter Nordlander
Tim,
Thanks for the reply. Prefetch is not really a problem here; in this
scenario I tend to use a prefetch of 1, since reliability is the concern
rather than performance.
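For context, prefetch is a client-side setting; with the ActiveMQ Java
client a queue prefetch of 1 can be set on the connection URI, for example
(host and port are placeholders):

    tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1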

Number of messages is sometimes a better measurement than memory, and a
more valid constraint. It's easier to cope with a requirement that "we
will buffer N orders" rather than "we will buffer N MB worth of orders".

I will consider adding a feature request and, if it's not too
time-consuming, see what it takes to submit a patch.

BR Petter

Re: How to limit queue depth by #msgs

artnaseef
That's an interesting requirement.  The memory usage limit is there to help ensure the broker's resources are not exhausted.

Is the end intent the same in using a message count?

Re: How to limit queue depth by #msgs

Tim Bain
I can't speak to Petter's scenario, but in my use case that's the intent:
we don't want to allow an unbounded number of messages to build up on the
broker.  But the slow consumer abort strategy is also part of the
protection strategy for the broker (we primarily use topics with
non-durable subscriptions, and aborting a slow consumer dumps the messages
it has built up), and we're unwilling to sacrifice the ability to abort
slow topic consumers in order to get the more granular memory protection.
We'd love to have both, but to make that work they would both have to use
the same metric (and it would be easier to switch memoryLimit to use
message counts than to switch the slow consumer abort strategy to use
total bytes).
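To make the trade-off concrete, the setup we moved away from looked
something like the sketch below; the problem is that with a small
memoryLimit, producers can be flow controlled before a subscriber has been
slow long enough to be aborted (all values illustrative):

    <policyEntry topic=">" memoryLimit="5mb" producerFlowControl="true">
      <slowConsumerStrategy>
        <!-- Illustrative: abort subscribers that stay slow, dumping the
             messages backed up for them -->
        <abortSlowConsumerStrategy maxSlowDuration="30000"/>
      </slowConsumerStrategy>
    </policyEntry>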


Re: How to limit queue depth by #msgs

Petter Nordlander
In reply to this post by artnaseef
Yes, that's my intent: keeping badly behaving producers/consumers from
affecting time-critical messages/queues.

The memory limit per queue is really only applicable when the vm-cursor is
used. If a store cursor is used (which is almost the only option on high
volume, low memory systems), then data is swapped to disk, and you cannot
really constrain a queue from using X MB/GB of store space. The
storeUsageHighWaterMark setting only considers global store usage at a
single point.

Calculating the actual store usage (on disk) for a certain queue is likely
very hard, since you need to take into account messages blocking KahaDB
transaction log files that contain mostly consumed messages. A "queue
depth" limit would be an easy, store/cursor/persistence-independent way to
keep a producer from filling up the broker. It's also very easy to
communicate this restriction to end users, i.e. "We will buffer up to 10k
messages for your queue; if you need more, then we need to implement the
consumer as a highly available solution so as not to put other
communication at risk".
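To illustrate what is available today: the store can only be capped
globally, and the per-destination knob is a percentage of that global
figure rather than a per-queue byte or message budget (the limits below
are only examples):

    <systemUsage>
      <systemUsage>
        <storeUsage>
          <!-- Illustrative: global cap on KahaDB store usage -->
          <storeUsage limit="10 gb"/>
        </storeUsage>
      </systemUsage>
    </systemUsage>

    <!-- Per destination: flow control producers once *global* store usage
         passes 80% -- still not a per-queue size or count limit -->
    <policyEntry queue=">" storeUsageHighWaterMark="80"/>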

BR Petter

Re: How to limit queue depth by #msgs

artnaseef
OK, please see this jira ticket and vote for it!

https://issues.apache.org/jira/browse/AMQ-5522

Please comment on the ticket, or let me know, if any updates to it are desired.