[DISCUSS] Separate project 4 (micro)benchmarks


[DISCUSS] Separate project 4 (micro)benchmarks

nigro_franz
Hi guys!

In the last few months I've produced a lot of different tests and microbenchmarks (using JMH so as not to get cheated by the JVM) to measure in isolation the Journal performance, the garbage production of Artemis's queues, encoding/decoding latencies, the Netty connector's scaling, and so on. I've noticed how hard it is to find a proper place to put them without creating chaos: the Artemis project tree ends up with a lot of "hybrid" modules that mix what the broker needs in order to work with code that exists solely to measure how performant its individual parts are.
I suspect that the more Artemis grows, the more it will need a well-structured standalone project that uses and refers to each part of Artemis that has to be benchmarked in isolation.
This would improve the quality of the Artemis codebase, finally freeing it from ever-growing code (and dependencies) that doesn't provide or test anything, and it would improve the quality of the benchmarks too, since they would no longer be executed "by hand" (producing unreliable results!) but with proper tools and a proper environment.
What do you think?

P.S: I vote for JMH!!!!
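(To illustrate the "getting cheated by the JVM" problem mentioned above: without warmup and without consuming results, the JIT can eliminate the measured work entirely. JMH handles this properly with `@Warmup` and its `Blackhole` sink; the stdlib-only sketch below, with hypothetical class and method names, just demonstrates the trap.)

```java
// Sketch only: shows why naive hand-rolled timing is unreliable, and the
// two mitigations (warmup, result consumption) that JMH automates for real.
public class NaiveBenchmark {

    // Consuming results into a volatile field keeps the JIT from proving the
    // work is dead and deleting it; JMH's Blackhole serves the same purpose.
    static volatile long BLACKHOLE;

    // The workload we want to measure in isolation.
    static long sum(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) acc += i;
        return acc;
    }

    // Times `iterations` calls of sum(n) and returns elapsed nanoseconds.
    static long timeNanos(int n, int iterations) {
        long sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += sum(n);          // result is used, so the loop can't be eliminated
        }
        long elapsed = System.nanoTime() - start;
        BLACKHOLE = sink;            // publish the sink so it stays observable
        return elapsed;
    }

    public static void main(String[] args) {
        // A few warmup rounds so the JIT compiles sum() before the measured run.
        for (int i = 0; i < 3; i++) timeNanos(1_000, 10_000);
        System.out.println("elapsed ns: " + timeNanos(1_000, 10_000));
    }
}
```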

Re: [DISCUSS] Separate project 4 (micro)benchmarks

clebertsuconic
It sounds like this would be a better fit as a directory within Artemis.
I'm not sure it needs to be part of the main pom..


Depending on how it grows we may find another place, but we could
start it there first, and if it later becomes bigger we can pull it
out.

For example, we have the examples tree... this could go under
./examples/benchmarks, or sit at the same level as examples.

WDYT?
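(Keeping the benchmarks out of the main pom could look roughly like the sketch below: a standalone pom that depends on the Artemis pieces under test plus the JMH harness. Module name, versions, and the choice of artemis-journal as the measured dependency are illustrative, not decided.)

```xml
<!-- Hypothetical: e.g. ./examples/benchmarks/pom.xml or a sibling of examples/ -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.apache.activemq</groupId>
  <artifactId>artemis-benchmarks</artifactId> <!-- illustrative name -->
  <version>1.0-SNAPSHOT</version>

  <dependencies>
    <!-- An Artemis piece measured in isolation, e.g. the Journal -->
    <dependency>
      <groupId>org.apache.activemq</groupId>
      <artifactId>artemis-journal</artifactId>
      <version>${artemis.version}</version>
    </dependency>
    <!-- JMH harness -->
    <dependency>
      <groupId>org.openjdk.jmh</groupId>
      <artifactId>jmh-core</artifactId>
      <version>${jmh.version}</version>
    </dependency>
    <dependency>
      <groupId>org.openjdk.jmh</groupId>
      <artifactId>jmh-generator-annprocess</artifactId>
      <version>${jmh.version}</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
</project>
```

Since the parent pom would never list this directory as a `<module>`, the broker build stays untouched; the benchmarks build only when invoked directly.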



On Wed, Feb 8, 2017 at 6:45 AM, nigro_franz <[hidden email]> wrote:




--
Clebert Suconic

Re: [DISCUSS] Separate project 4 (micro)benchmarks

nigro_franz

IMHO the most important part is not having the benchmarks mixed with tests and other "similar" things; whether that means a separate project or a separate module/folder, it will be effective either way...
I have doubts about putting it under "examples", but it would be great to have it at the same level!
I could take care of porting the benchmarks that already exist around the project sources (tests, for the most part) into the new folder (it still needs a proper name, but "benchmarks" seems good), and then evaluate, after opening a thread on the mailing list, what is effective as it is and what could be dropped or rewritten with proper tools (JMH!!!)... makes sense?

Re: [DISCUSS] Separate project 4 (micro)benchmarks

nigro_franz
If everyone is OK with it, I'll start porting the existing (micro)benchmarks into a separate folder next to examples, using JMH. WDYT?