[DISCUSS] Use pooled buffers on message body

clebertsuconic
One thing I couldn't do before without some proper thinking was using
pooled buffers for the message bodies.

It would really improve the perf numbers if that could be achieved...


I'm thinking this should be done on the server only. Doing it on the
client would mean giving users some API to tell us when the message
is gone and no longer needed... I don't think we can do that with the
JMS, Core, or any of the Qpid clients... although we could think about
such an API in the future.



For the server: I would need to capture when the message is released...
the only pitfall there is paging, as the page read may come and
go... so this will involve some work on making sure we call
release in the proper places.
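
Roughly, the discipline would look like this (just a sketch using
Netty's ByteBuf refcounting; the names are illustrative, not actual
Artemis code):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class RetainReleaseSketch {
    // Every place that resurrects the message (e.g. a page read)
    // retains the body, and releases when it is done with it.
    static void pageRead(ByteBuf body) {
        body.retain();              // one more holder of the body
        try {
            // ... route / deliver the message ...
        } finally {
            body.release();         // holder done; pool reclaims at zero
        }
    }

    public static void main(String[] args) {
        ByteBuf body = PooledByteBufAllocator.DEFAULT.heapBuffer(64);
        pageRead(body);
        body.release();             // the original owner's reference
    }
}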


We would still need to copy from the Netty buffer into another
pooled buffer, as the Netty buffer needs to be a native (direct) buffer
while the message uses a regular (non-native, heap) buffer.
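
A minimal sketch of that copy using Netty's pooled allocator
(illustrative only, not the actual broker code):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.util.CharsetUtil;

public class CopyToHeapSketch {
    public static void main(String[] args) {
        PooledByteBufAllocator alloc = PooledByteBufAllocator.DEFAULT;

        // What arrives from the wire: a pooled direct (native) buffer.
        ByteBuf wire = alloc.directBuffer(256);
        wire.writeBytes("message body".getBytes(CharsetUtil.UTF_8));

        // Copy into a pooled heap buffer the message can own.
        ByteBuf body = alloc.heapBuffer(wire.readableBytes());
        body.writeBytes(wire);

        wire.release();  // the wire buffer goes straight back to the pool
        System.out.println(body.toString(CharsetUtil.UTF_8));
        body.release();  // released when the message is done
    }
}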


I am thinking of investing my time in this (even my spare time, if
need be) after ApacheCon next week.


This will certainly attract Francesco's and Michael Pearce's attention...
and it would be a pretty good improvement towards even less GC
pressure.





--
Clebert Suconic

Re: [DISCUSS] Use pooled buffers on message body

Michael André Pearce

Hi Clebert.

+1 from me definitely.

Agreed, this definitely should target the server, not the clients.

Having the message / the buffer used by the message pooled would be great, as it will reduce GC pressure.

I would like to take that one step further and ask whether we could actually avoid copying the buffer contents at all when passing from/to Netty. The zero-copy nirvana.

I know you said the pools would be separate. But if we could use the same memory address we could avoid the copy, reducing latency as well. This could be done by sharing the buffer and the pool, or by using a retained slice/duplicate.
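
For illustration, a retained slice with Netty shares the same memory
without any copy (a sketch, not actual broker code):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.util.CharsetUtil;

public class ZeroCopySketch {
    public static void main(String[] args) {
        ByteBuf frame = PooledByteBufAllocator.DEFAULT.directBuffer(256);
        frame.writeBytes("message body".getBytes(CharsetUtil.UTF_8));

        // retainedSlice() bumps the refcount and shares the same memory:
        // no bytes are copied, the message just holds a view of the frame.
        ByteBuf body = frame.retainedSlice();

        frame.release();  // the pipeline is done with the frame...
        // ...but the slice keeps the underlying memory alive until:
        System.out.println(body.toString(CharsetUtil.UTF_8));
        body.release();
    }
}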

Cheers
Mike




Re: [DISCUSS] Use pooled buffers on message body

nigro_franz
In reply to this post by clebertsuconic
Hi Clebert!
+1 from me too!
Agreed with everything Michael said about reducing latencies as well.
This is something worth addressing, and I'll gladly contribute to it :)

Franz

Re: [DISCUSS] Use pooled buffers on message body

clebertsuconic
In reply to this post by Michael André Pearce
I'm not sure we can keep the message body as a native buffer...

I have seen it being expensive, especially when dealing with
clustering and paging... a lot of times I have seen memory exhaustion...

For AMQP on Qpid Proton, though, that would require a lot more
changes... it's not even possible to think about it now unless we make
substantial changes to Proton. Proton likes to keep its own internal
pool and make a lot of copies... so we cannot do this on AMQP yet (I
would like to, though).




But I'm always advocating tackling one thing at a time...
the first thing is to have some reference counting in place to tell us
when to deallocate the memory used by the message, in such a way that
it works with both paging and non-paging... anything else will then be
"relatively" easier.



--
Clebert Suconic

Re: [DISCUSS] Use pooled buffers on message body

Michael André Pearce
I agree, iterative targeted steps are best.

So even if we just pool the messages and keep copying the buffer as we do today, it's a step in the right direction.


Sent from my iPhone
