Flow control on a recursive system, guidelines?

Flow control on a recursive system, guidelines?

Francesco Vivoli
Hi all

I have been trying to get a stable configuration for over a month now, but
I keep encountering a variety of issues[1] that reduce everything to an
unusable system.
I have now built a simpler system, which perhaps exposes the same behavior;
it is just a pipeline of two queues:

seeds--->q1--->L1--->q2--->L2

what happens is that when L1 receives a message, it sends a number of messages to q2.
I run tests varying the number of messages sent to q1 and the number of
messages forwarded by L1 to q2 (toSend and toForward respectively).

A first scenario, representing a real use case, has toSend small (1-10) and toForward
big (10k-100k).

A first observation is that with Kaha persistence everything eventually stops once the broker
becomes too full[2] of messages (see the attached dump). For that reason I'm using the default
JDBC persistence instead.

A general behavior I've seen is that if toSend is big (10k), no flow control
seems to take place: L2 eventually stops receiving messages when the broker
runs out of memory (when not using a UsageManager), or the heap space is exhausted
(when setting it to something like 1 GB).
In this case I'd expect sends to L1 to be slowed down, to allow L2 to consume the
messages that keep arriving, but this doesn't happen; seeds keep being sent at the same rate.

On the other hand, having toSend << toForward gets all the messages delivered,
but after some time everything slows down awfully (maybe 1 msg/second), mainly because
the JVM is garbage collecting all the time (the heap space is completely used up).


Basically I need some way to slow down the first producer so that the whole system is not flooded
with messages that I can then barely consume.
My question is: how should I configure the destination and broker memory limits, and possibly
the prefetch values, so that I neither run out of memory nor end up with a frozen system?

Setting higher memory limits means the JVM needs a bigger heap (which makes it less stable
in uncontrolled environments), but keeping them low seems to prevent the broker
from sending more messages at some point (again [1]). On the other hand, since everything runs in the
same VM, I don't know whether it's better to set prefetch limits higher or lower, as pending messages
have to be stored somewhere, either on the broker or on the consumer...
Reading about slow consumers[3] doesn't point me to any option; I can't discard messages.
So I end up considering implementing some sort of timing between sends, or waiting for
message spooling, which I read should come with 4.2...
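For what it's worth, the "timing between sends" idea can be prototyped in a few lines of plain Java while waiting for spooling. This is only a sketch (the class and its names are made up, not part of any ActiveMQ API): it paces the seed producer to a fixed maximum rate by sleeping between sends.

```java
// Hypothetical fixed-rate pacer for the seed producer; call pace() before
// each producer.send(). Not part of any ActiveMQ API.
public class PacedSender {
    private final long minIntervalMillis;
    private long lastSend = 0;

    public PacedSender(double maxMessagesPerSecond) {
        this.minIntervalMillis = (long) (1000.0 / maxMessagesPerSecond);
    }

    // Blocks until at least minIntervalMillis has elapsed since the
    // previous send, then records the new send time.
    public synchronized void pace() throws InterruptedException {
        long wait = lastSend + minIntervalMillis - System.currentTimeMillis();
        if (wait > 0) {
            Thread.sleep(wait);
        }
        lastSend = System.currentTimeMillis();
    }
}
```

The obvious drawback is that the rate is fixed rather than driven by downstream load, which is why spooling (or feedback from the queue depth) would be preferable.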


Sorry for the long message, but it's hard to express all of the above; there are probably too
many options to be considered.
Any help, guideline or hint would be most appreciated; it seems this project is never
going to be released :(
For the interested: http://www.ripe.net/info/stats/hostcount/hostcount++/


Thanks everybody,
Francesco

[1]: http://www.nabble.com/Consumer-Deadlock-tf2014492.html#a5536571
[2]: I don't have a precise measure of "too full", but I'd say something like 800k messages,
with a heap space of 128 MB
[3]: http://activemq.org/site/slow-consumer-handling.html (attachment: kahadump.log)

Re: Flow control on a recursive system, guidelines?

rajdavies
Definitely the best approach would be to use the message spooling,
which is available in 4.2 (though I'm hoping that the 4.2 release
will in fact be renamed the 5.0 release).
There should be some milestone releases for 4.2 available in the next
couple of weeks.

cheers,

Rob

Re: Flow control on a recursive system, guidelines?

Francesco Vivoli
Hi Rob

since this path is pretty critical to my schedule, do you maybe know
whether message spooling will be in these releases, and whether it is meant
to be usable?
BTW: will they be announced?

Thanks for the reply,
cheers
Francesco


Re: Flow control on a recursive system, guidelines?

rajdavies
Hi Francesco,

spooling is in these releases and will be usable

cheers,

Rob

Re: Flow control on a recursive system, guidelines?

dkfn
In reply to this post by Francesco Vivoli
Hi Francesco,

I might not be understanding your problem correctly, but there should
be some solutions available if you can't move to 4.2 straight away:

1. Use consumer.receive instead of a MessageListener on L1, so that you
can explicitly control the volume of messages it passes on to q2.
You can heuristically throttle it with a Thread.sleep between
consuming a message and sending it to q2.

2. If the heuristic method doesn't work (because you're not maximising
throughput), have L2 inform L1 when it has finished processing messages.

3. Use the JMX APIs to find out how many messages are in q2 and throttle L1 accordingly.
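Suggestion 1 can be sketched in plain Java. Here java.util.concurrent BlockingQueues stand in for the JMS queues (the real code would use MessageConsumer.receive(timeout) and MessageProducer.send); the point is that a pull loop throttles where a listener does not:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class ThrottledForwarder {

    // Pull-based forwarding loop. Unlike an onMessage() listener, nothing is
    // pushed at us while we sleep, so the pause genuinely limits the rate at
    // which messages are taken off q1 and forwarded to q2.
    public static int forward(BlockingQueue<String> q1,
                              BlockingQueue<String> q2,
                              long pauseMillis) throws InterruptedException {
        int forwarded = 0;
        String msg;
        // receive with a timeout; stop once the queue stays empty
        while ((msg = q1.poll(200, TimeUnit.MILLISECONDS)) != null) {
            q2.put(msg);                // forward to the next stage
            forwarded++;
            Thread.sleep(pauseMillis);  // crude per-message throttle
        }
        return forwarded;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q1 = new LinkedBlockingQueue<>();
        BlockingQueue<String> q2 = new LinkedBlockingQueue<>();
        for (int i = 0; i < 5; i++) q1.put("seed-" + i);
        System.out.println("forwarded " + forward(q1, q2, 1)); // prints: forwarded 5
    }
}
```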

If you want more consumers on L1, use something like the Executors from Java 5.0

?
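Suggestion 3 only needs the JDK's javax.management API once you know the queue's ObjectName in the broker's MBeanServer. The MBean below is a made-up stand-in (the real ActiveMQ object name and attribute are likely different); it just shows the polling mechanics:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class QueueDepthThrottle {

    // Hypothetical stand-in for the broker's per-queue statistics MBean.
    public interface QueueStatMBean {
        long getQueueSize();
    }

    public static class QueueStat implements QueueStatMBean {
        private final long size;
        public QueueStat(long size) { this.size = size; }
        public long getQueueSize() { return size; }
    }

    static final long HIGH_WATER = 50_000; // arbitrary threshold

    // Poll the queue depth through JMX; the producer pauses when it is high.
    public static boolean shouldPause(MBeanServer mbs, ObjectName queue) throws Exception {
        long depth = (Long) mbs.getAttribute(queue, "QueueSize");
        return depth > HIGH_WATER;
    }

    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName q2 = new ObjectName("example:type=Queue,name=q2"); // hypothetical name
        mbs.registerMBean(new QueueStat(80_000), q2);
        System.out.println("pause? " + shouldPause(mbs, q2)); // prints: pause? true
    }
}
```

Sampling this at a coarse interval (say, once a second) keeps the JMX overhead negligible compared to the message rate.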

cheers,
j.



Re: Flow control on a recursive system, guidelines?

Francesco Vivoli
Hi

thanks a lot for the hints, a few follow ups:

1. Why is this different from calling Thread.sleep within an onMessage()?
Anyway yes, this is something I have thought of.

2. Well, the problem is that the system I depicted is a stripped-down version
of the one I work on. It is really a recursive system, where L1 is
the last stage of a longer pipeline and L2 is the head of that pipeline.
Thus L2 eventually causes more messages to arrive at L1, and it's possible
that they do so before the first L1.onMessage() finishes sending messages
back to L2.

3. Yes, this is something that would help, but given the rate of message arrival,
wouldn't I introduce too much overhead?

About the executors: yes, but isn't this supposed to be handled by Jencks if I
specify a pool of consumers?

Thanks again
Cheers
Francesco


Re: Flow control on a recursive system, guidelines?

dkfn
hi Francesco,

On 1/1/07, drvillo <[hidden email]> wrote:
>
> Hi
>
> thanks a lot for the hints, a few follow ups:
>
> 1. Why is this different than calling Thread.sleep within an onMessage()?
> Anyway yes, this is something that I have thought of.

Yup, that's pretty much it. But it is a sledgehammer.

> 2. well the problem is that the system I depicted is a stripped down version
> of the one I work on. This is really a recursive system, where L1 is
> the last stage of a longer pipeline and L2 is the head of such pipeline.
> Thus L2 causes more messages to arrive to L1 eventually, and it's possible
> that they do so before the first L1.onMessage() finishes sending messages
> back to L2.

Yipe. You might want something like a compute server. You could do it
with a single compute queue, but it'll double the number of hops in
your pipeline (i.e. seed --> compute queue --> C --> q1 --> L1 -->
compute queue --> C --> q2 --> L2, etc.).

> 3. yes, this is something that would help, but given the rate of message
> arrival, wouldn't I insert too much overhead?

You could sample them at whatever interval you prefer.

> About the executors yes, but isn't this supposed to be handled by jencks
> if I specify a pool of consumers?

Ah, yup. Jencks will do that job for you.

> Thanks again
> Cheers
> Francesco
>
>
> dkfn wrote:
> >
> > Hi Francesco,
> >
> > I might not be understanding your problem correctly but there should
> > be some solutions available if you can't move to 4.2 straight away:
> >
> > 1. use consumer.receive instead of a messagelistener on L1 so that you
> > can explicitly control the volume of messages that it passes on to q2.
> > you can heuristically throttle it with a Thread.sleep between
> > consumption of a message and sending it to q2.
> >
> > 2. if the heuristic method doesn't work (because you're not maximising
> > throughput), have L2 inform L1 when it's finished processing messages.
> >
> > 3. Use the JMX api's to find out how many messages are in q2 to throttle
> > L1
> >
> > If you want more consumers on L1, use something like the Executors from
> > Java 5.0
> >
> > ?
> >
> > cheers,
> > j.
> >
> >
> > On 12/29/06, drvillo <[hidden email]> wrote:
> >>
> >> Hi all
> >>
> >> I have been trying to get a stable configuration for over a month now,
> >> but
> >> I keep encountering a variety of issues[1] that reduces everything to an
> >> unutilizable
> >> system.
> >> I have now built a simpler system, which perhaps exposes the same
> >> behavior,
> >> it is just a pipeline of two queues
> >>
> >> seeds--->q1--->L1--->q2--->L2
> >>
> >> what happens is that when L1 receives a message it sends a number of
> >> messages to q2.
> >> I run tests changing the number of messages to be sent to q1 and the
> >> number
> >> of
> >> messages to be forwarded by L1 to q1 (toSend and toForward respectively)
> >>
> >> A first scenario representing a real use case is having toSend small
> >> (1-10)
> >> and toForward
> >> big(10k-100k)
> >>
> >> A first observation is that if using kaha persistence everything
> >> eventually
> >> stops, when the broker
> >> starts to be too full[2] of messages, having the attached dump. Thus I'm
> >> using the default
> >> jdbc persistence.
> >>
> >> A general behavior that I've seen is that if toSend is big (10k) then no
> >> flow control
> >> seems to take place, as L2 eventually stops receiving messages when the
> >> broker
> >> runs out of memory (when not using an usagemanageer), or the heap space
> >> is
> >> exahusted
> >> (when setting it to something like 1Gb).
> >> In this case I'd expect that sends to L1 are slowed down, to allow L2 to
> >> consume the (more) messages
> >> that are arriving. but this doesn't happen, seeds are kept being sent at
> >> the
> >> same rate.
> >>
> >> On the other hand having toSend<<toForward makes all the messages to be
> >> delivered,
> >> but after some time everything slows awfully down (1msg/second maybe...),
> >> mainly because
> >> the jvm is garbage collecting all the time (the heap space is being all
> >> used
> >> up).
> >>
> >>
> >> Basically I need some way to slow down the first producer so that the
> >> whole
> >> system is not flooded
> >> with messages that then I can barely consume.
> >> My question then is, how should I configure the destination and broker
> >> memory limits, and eventually
> >> the prefetch values so that I don't either run out of memory or end up
> >> with
> >> a frozen system?
> >>
> >> Setting higher memory limits causes the JVM to need a bigger heap
> >> (which makes it less stable in uncontrolled environments), but
> >> keeping them low seems to prevent the broker from sending more
> >> messages at some point (again [1]). On the other hand, with
> >> everything in the same VM, I don't know whether it's better to set
> >> prefetch limits higher or lower, as pending messages have to be
> >> stored somewhere, either on the broker or on the consumer...
> >> Reading about slow consumers[3] doesn't point me to any option; I
> >> can't discard messages. So I end up considering implementing a sort
> >> of timing between sends, or waiting for message spooling, which I
> >> read should come with 4.2...
> >>
> >>
> >> Sorry for the long message, but it's hard to express all of the
> >> above; there are probably too many options to be considered.
> >> Any help, guideline or hint would be most appreciated. It seems this
> >> project is never going to be released:(
> >> For those interested, it is
> >> http://www.ripe.net/info/stats/hostcount/hostcount++/
> >>
> >>
> >> Thanks everybody,
> >> Francesco
> >>
> >> [1]: http://www.nabble.com/Consumer-Deadlock-tf2014492.html#a5536571
> >> [2]: I haven't got any measure for "too full", but I'd say something
> >> like 800k messages, with a heap space of 128Mb
> >> [3]:  http://activemq.org/site/slow-consumer-handling.html
> >> http://www.nabble.com/file/5167/kahadump.log kahadump.log
> >> --
> >> View this message in context:
> >> http://www.nabble.com/Flow-control-on-a-recursive-system%2C-guidelines--tf2894291.html#a8086394
> >> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
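For context on the memory-limit question above: in ActiveMQ of this era, producer flow control is driven by the broker-wide usage manager plus per-destination memory limits, which is what makes producer sends block instead of exhausting the heap. A hedged sketch of an activemq.xml fragment follows; the element names are roughly as in the 4.x configuration schema and the limit values are placeholders, so check both against the version actually in use:

```xml
<!-- Sketch only: verify element names against your ActiveMQ version;
     the limits here are placeholders, not recommendations. -->
<broker xmlns="http://activemq.org/config/1.0" brokerName="localhost">

  <!-- Overall broker memory budget; producers are throttled when it fills -->
  <memoryManager>
    <usageManager id="memory-manager" limit="64 MB"/>
  </memoryManager>

  <!-- Per-destination limit so one busy queue (e.g. q2) cannot starve q1 -->
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <policyEntry queue=">" memoryLimit="5 MB"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>

  <!-- The jdbc-based persistence mentioned in the message -->
  <persistenceAdapter>
    <journaledJDBC dataDirectory="activemq-data"/>
  </persistenceAdapter>
</broker>
```

The trade-off discussed in the message shows up directly here: the usage manager limit bounds heap growth, while the per-destination limit decides which producer feels the back-pressure first.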

Re: Flow control on a recursive system, guidelines?

Francesco Vivoli
Hi
> > 1. Why is this different than calling Thread.sleep within an onMessage()?
> > Anyway yes, this is something that I have thought of.
>
> Yup, that's pretty much it. But it is a sledgehammer.

Yes, actually this was just to see if timing could help:)


> > 2.
>
> Yipe. You could want something like a compute server. You could do it
> with a single compute queue but it'll double the number of hops in
> your pipeline (i.e. seed --> compute queue --> C --> q1 --> L1 -->
> compute queue --> C --> q2 --> L2 etc.).

Mmhh, actually I'm not sure I understand what you mean... The flow of
messages would be the same; what would the compute server actually do?

> > 3. yes, this is something that would help, but given the rate of
> > message arrival wouldn't I insert too much overhead?
>
> You could sample them at whatever interval you prefer.

Well, I'd have to do it every time I want to enqueue something, but
probably there's some heuristic to be used.


Thanks, again:)
cheers
Francesco
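The "timing between sends" idea discussed in point 1 can be sketched as a tiny standalone throttle that a producer (or a re-publishing onMessage) calls before each send. All names here are illustrative; this is not part of any ActiveMQ API:

```java
// Minimal fixed-interval throttle: acquire() blocks just long enough to
// keep sends at most one per minIntervalMillis. A sledgehammer, as noted
// above, but it bounds the producer rate independently of the broker.
public class SendThrottle {

    private final long minIntervalMillis;
    private long lastSendMillis = 0;

    public SendThrottle(long minIntervalMillis) {
        this.minIntervalMillis = minIntervalMillis;
    }

    /** Sleeps if needed so consecutive calls are at least one interval apart. */
    public synchronized void acquire() throws InterruptedException {
        long wait = lastSendMillis + minIntervalMillis - System.currentTimeMillis();
        if (wait > 0) {
            Thread.sleep(wait);
        }
        lastSendMillis = System.currentTimeMillis();
    }

    public static void main(String[] args) throws InterruptedException {
        SendThrottle throttle = new SendThrottle(20); // >= 20 ms between sends
        long start = System.currentTimeMillis();
        for (int i = 0; i < 5; i++) {
            throttle.acquire();
            // producer.send(message) would go here
        }
        long elapsed = System.currentTimeMillis() - start;
        // First acquire is free; the other four wait ~20 ms each, so the
        // loop takes roughly 80 ms (75 ms threshold allows timer slack).
        if (elapsed < 75) {
            throw new AssertionError("throttle ran too fast: " + elapsed + " ms");
        }
        System.out.println("OK");
    }
}
```

Calling acquire() inside onMessage() before forwarding reproduces the Thread.sleep experiment, but with the delay expressed as a target rate rather than a hard-coded pause.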

Re: Flow control on a recursive system, guidelines?

dkfn
On 1/3/07, drvillo <[hidden email]> wrote:

>
> Hi
>
>
> >> 1. Why is this different than calling Thread.sleep within an onMessage()?
> >> Anyway yes, this is something that I have thought of.
> >
> > Yup, that's pretty much it. But it is a sledgehammer.
> >
> yes, actually this was just to see if timing could help:)
>
>
>
>
> >> 2.
> > Yipe. You could want something like a compute server. You could do it
> > with a single compute queue but it'll double the number of hops in
> > your pipeline. (i.e. seed --> compute queue --> C --> q1 --> L1 -->
> > compute queue --> C --> q2 --> L2 etc.).
> >
>
> Mmhh, actually I'm not sure I understand what you mean... The flow of
> messages would be the same; what would the compute server actually do?

Ah, yes. I was thinking about something like this:
http://www.artima.com/jini/jiniology/js2.html but done with JMS. On
reflection, though, it would be a pretty invasive change.

> >> 3. yes, this is something that would help, but given the rate of
> >> message arrival wouldn't I insert too much overhead?
> >
> > You could sample them at whatever interval you prefer.
> >
> Well, I'd have to do it every time I want to enqueue something, but
> probably there's some heuristic to be used.
>
> Thanks, again:)
> cheers
> Francesco
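The sampling heuristic touched on in point 3 (probe the queue depth only every Nth send rather than on each enqueue) might be sketched like this. All names are illustrative, and the depth supplier stands in for whatever real measure would be used, such as a JMX queue-size query:

```java
import java.util.function.LongSupplier;

// Sampled back-pressure: check the (possibly costly) queue-depth probe only
// once per sampleEvery sends, and signal a pause when the sampled depth is
// above a high watermark. Keeps the per-send overhead near zero.
public class SampledBackpressure {

    private final LongSupplier queueDepth;   // stand-in for a real probe
    private final int sampleEvery;           // probe once per N sends
    private final long highWatermark;
    private long sends = 0;
    private long probes = 0;

    public SampledBackpressure(LongSupplier queueDepth, int sampleEvery,
                               long highWatermark) {
        this.queueDepth = queueDepth;
        this.sampleEvery = sampleEvery;
        this.highWatermark = highWatermark;
    }

    /** Returns true when the caller should pause before this send. */
    public boolean shouldPause() {
        sends++;
        if (sends % sampleEvery != 0) {
            return false;                    // skip the probe entirely
        }
        probes++;
        return queueDepth.getAsLong() > highWatermark;
    }

    public long probeCount() {
        return probes;
    }

    public static void main(String[] args) {
        // Pretend the queue is always 1000 deep; watermark 500; sample every 10.
        SampledBackpressure bp = new SampledBackpressure(() -> 1000L, 10, 500);
        int pauses = 0;
        for (int i = 0; i < 100; i++) {
            if (bp.shouldPause()) {
                pauses++;
            }
        }
        // 100 sends sampled every 10th => 10 probes, each over the watermark.
        if (bp.probeCount() != 10 || pauses != 10) {
            throw new AssertionError("probes=" + bp.probeCount() + " pauses=" + pauses);
        }
        System.out.println("OK");
    }
}
```

The sampleEvery knob is exactly the heuristic Francesco alludes to: a larger value lowers overhead but reacts more slowly to a filling queue.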