[DISCUSS] Use pooled buffers on message body

[DISCUSS] Use pooled buffers on message body

clebertsuconic
One thing I couldn't do before without some proper thinking was to use
a pooled buffer for the message bodies.

It would actually rock out the perf numbers if that could be achieved...


I'm thinking this should be done on the server only. Doing it on the
client would mean giving users some API to tell us when the message
is gone and no longer needed.. I don't think we can do this with JMS
core, or any of the qpid clients... although we could think about an
API for such a thing in the future.



For the server: I would need to capture when the message is released..
the only pitfall for this would be paging, as the Page read may come and
go... So this will involve some work on making sure we make the release
calls in the proper places.


We would still need to copy from the Netty buffer into another pooled
buffer, as the Netty buffer would need to be a native buffer while the
message uses a regular (non-native) buffer.
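
A minimal sketch of that copy step, assuming Netty's pooled heap
allocator on the message side (the helper class and method name are
illustrative, not existing Artemis code):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.PooledByteBufAllocator;

    final class BodyCopy {
        // Copy the body out of the native (direct) network buffer into a
        // pooled, non-native heap buffer, so the network buffer can be
        // released back to Netty right away.
        static ByteBuf copyBodyToPooledHeap(ByteBuf nettyBuffer) {
            int size = nettyBuffer.readableBytes();
            ByteBuf body = PooledByteBufAllocator.DEFAULT.heapBuffer(size, size);
            body.writeBytes(nettyBuffer, nettyBuffer.readerIndex(), size);
            return body; // whoever owns the message must release() it eventually
        }
    }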


I am thinking of investing my time on this (even in my spare time, if
need be) after ApacheCon next week.


This will certainly attract Francesco's and Michael Pearce's attention..
but it would be a pretty good improvement towards even less GC
pressure.





--
Clebert Suconic

Re: [DISCUSS] Use pooled buffers on message body

MichaelAndrePearce

Hi Clebert.

+1 from me definitely.

Agreed, this should definitely target the server, not the clients.

Having the message / buffer used by the message pooled would be great, as it will reduce GC pressure.

I would like to take that one step further and ask whether we could actually avoid copying the buffer contents at all when passing from/to Netty. The zero-copy nirvana.

I know you propose separate buffer pools. But if we could use the same memory address we could avoid the copy, reducing latency as well. This could be done by sharing the buffer and the pool, or by using a retained slice/duplicate.
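
A sketch of the retained slice/duplicate idea, using Netty's real
retainedSlice() API (the wrapper class and offsets are illustrative):

    import io.netty.buffer.ByteBuf;

    final class ZeroCopyBody {
        // Share the inbound buffer's memory instead of copying:
        // retainedSlice() bumps the reference count, so the pool won't
        // reclaim the region until the message releases its slice.
        static ByteBuf shareBody(ByteBuf inbound, int offset, int length) {
            return inbound.retainedSlice(offset, length);
        }
    }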

Cheers
Mike




Re: [DISCUSS] Use pooled buffers on message body

nigro_franz
In reply to this post by clebertsuconic
Hi Clebert!!
+1 from me too!!
I agree with everything Michael said about reducing latencies, too.
This is something worth addressing, and I'll gladly contribute to it :)

Franz

Re: [DISCUSS] Use pooled buffers on message body

clebertsuconic
In reply to this post by MichaelAndrePearce
I'm not sure we can keep the message body as a native buffer...

I have seen it being expensive, especially when dealing with
clustering and paging.. a lot of times I have seen memory exhaustion...

For AMQP, on Qpid Proton though.. that would require a lot more
changes.. it's not even possible to think about it now, unless we make
substantial changes to Proton.. Proton likes to keep its own internal
pool and make a lot of copies.. so we cannot do this on AMQP yet. (I
would like to, though.)




But I always advocate tackling one thing at a time...
the first thing is to have some reference counting in place to tell us
when to deallocate the memory used by the message, in such a way that
it works with both paging and non-paging... anything else will then be
"relatively" easier.






--
Clebert Suconic

Re: [DISCUSS] Use pooled buffers on message body

MichaelAndrePearce
I agree iterative targeted steps is best.

So even just pooling messages and keeping the buffer copy as it is today would be a step in the right direction.




Re: [DISCUSS] Use pooled buffers on message body

Matt Pavlovich-2
+1 this all sounds great



Re: [DISCUSS] Use pooled buffers on message body

Martyn Taylor
Using buffer pools throughout has been on our backlog for a long time, so
+1 on this.  The only thing I'd say here is that retrofitting the reference
counting (i.e. releasing the buffers) can sometimes lead to leaks if we
don't catch all cases, so we just need to be careful here.
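
For what it's worth, Netty's own leak detector can surface missed
releases during testing; a small sketch (the test class is
illustrative, the detector API is real Netty):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.PooledByteBufAllocator;
    import io.netty.util.ResourceLeakDetector;

    final class LeakCheck {
        public static void main(String[] args) {
            // PARANOID samples every allocation; suited to test runs,
            // not production.
            ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);

            ByteBuf buf = PooledByteBufAllocator.DEFAULT.heapBuffer(256);
            buf.writeLong(42L);
            // dropping `buf` without release() would be reported as a LEAK
            buf.release();
        }
    }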

One other thing to consider: we do have users that run Artemis in
constrained environments where memory is limited.  Allocating a chunk of
memory upfront for the buffers may not be ideal for that use case.

Cheers


Re: [DISCUSS] Use pooled buffers on message body

clebertsuconic
Perhaps we need a place to set the allocator.. Pooled versus Unpooled..


PooledRepository.getPool()...



Regarding the ref counts.. we will need to add new reference
counting.. the current one is a bit complex to reuse because of
delivery.. DLQs.. etc... it's a big challenge for sure!
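
A hypothetical sketch of the central holder floated above as
PooledRepository.getPool() — the class does not exist; everything here
is illustrative of the idea, one switch the rest of the broker consults:

    import io.netty.buffer.ByteBufAllocator;
    import io.netty.buffer.PooledByteBufAllocator;
    import io.netty.buffer.UnpooledByteBufAllocator;

    final class PooledRepository {
        private static volatile ByteBufAllocator allocator =
                PooledByteBufAllocator.DEFAULT;

        static ByteBufAllocator getPool() {
            return allocator;
        }

        // constrained environments could flip to unpooled at boot
        static void useUnpooled() {
            allocator = UnpooledByteBufAllocator.DEFAULT;
        }

        private PooledRepository() {
        }
    }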




--
Clebert Suconic

Re: [DISCUSS] Use pooled buffers on message body

nigro_franz
I'm checking the Netty doc on ref counting to understand this better: http://netty.io/wiki/reference-counted-objects.html#who-destroys-it

Re: [DISCUSS] Use pooled buffers on message body

clebertsuconic
The issue is with paging.. there's an LRU for the last accessed messages...

When the message is being sent we ++... after it's sent we --... so the
receiving part is supposed to destroy it, from what I read in that doc.
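
A sketch of that ++/-- around a send, following the Netty ownership
rule from the page linked above (the wrapper class is illustrative;
the channel write semantics are real Netty behaviour):

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.Channel;

    final class BodySend {
        // The ++ happens via retainedDuplicate(); Netty's outbound
        // pipeline performs the matching release() (the --) once the
        // bytes are written, i.e. the receiving party destroys it.
        static void sendBody(Channel channel, ByteBuf body) {
            channel.writeAndFlush(body.retainedDuplicate());
            // the original `body` keeps the reference held by the page
            // LRU; evicting it from the cache does the final release()
        }
    }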




--
Clebert Suconic

Re: [DISCUSS] Use pooled buffers on message body

Matt Pavlovich-2
In reply to this post by clebertsuconic
+1, having the memory pool/allocator be a configurable strategy or policy-type deal would be bonus level 12. Especially for embedded / kiosk / Raspberry Pi and Linux host container scenarios, as Martyn mentioned.



Re: [DISCUSS] Use pooled buffers on message body

clebertsuconic
@Martyn: you recently added some configuration on InVM to make it
pooled or not.. where is that? Where does the pool live now, after your
changes?

I can read the code, but it's easier to ask... :) Perhaps we should
make a class with a PoolServer for such things?


Like, I'm looking into perhaps adding a ClientTransaction retry, and I
would use the pool there as well. It would be best to have such a class
somewhere.




--
Clebert Suconic

Re: [DISCUSS] Use pooled buffers on message body

Martyn Taylor
@Clebert It's been added as a configuration option on the InVM
acceptor/connector. Take a look at those classes.


Re: [DISCUSS] Use pooled buffers on message body

clebertsuconic
This could be an easy fix... a System property... on ActiveMQBuffers:

https://gist.github.com/clebertsuconic/75128b8b3f788d7a9d3b213224b5be39



If disabled, it would mean disabled for good... which is the
intention on small devices.
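
The gist itself isn't reproduced in the thread, so the following is
only a guess at its shape: a system property read once at class load,
so that "disabled" really is disabled for good (the property name is
an assumption, not taken from the gist):

    import io.netty.buffer.ByteBufAllocator;
    import io.netty.buffer.PooledByteBufAllocator;
    import io.netty.buffer.UnpooledByteBufAllocator;

    final class BufferPoolSwitch {
        // evaluated once when the class loads; flipping the property
        // afterwards has no effect, matching the "for good" intent
        static final ByteBufAllocator ALLOCATOR =
                Boolean.parseBoolean(
                        System.getProperty("activemq.artemis.pooled.buffers", "true"))
                ? PooledByteBufAllocator.DEFAULT
                : UnpooledByteBufAllocator.DEFAULT;
    }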




--
Clebert Suconic