Dealing with the "over-prefetch" problem with large numbers of workers and many queue servers


Kevin Burton
We have a problem whereby we have a LARGE number of workers.  Right now
about 50k worker threads on about 45 bare metal boxes.

We have about 10 ActiveMQ servers / daemons which service these workers.

The problem is that my current design has a session per queue server per
thread. So this means I have about 500k sessions, each trying to prefetch
1 message at a time.

Since my tasks take about 30 seconds on average to execute, and each thread
is holding one prefetched message per server, a prefetched message can sit
for roughly 10 x 30 seconds = 5 minutes before it's processed.

That's a BIG problem in that I want to keep my latencies low!

And the BIG downside here is that a lot of my workers get their prefetch
buffers filled first, starving out other workers, which then sit doing nothing...

This leads to massive starvation, where some of my boxes are at 100% CPU and
others are at 10-20%, starved for work.

So I'm working on a new design whereby I use a message listener, allow it
to prefetch, and use a countdown latch inside the listener to wait for a
worker thread to process the message. Then I commit the message.

This solves the over-prefetch problem because we don't attempt to prefetch
the next message until the current one is processed.

Since I can't commit each JMS message individually, I'm only left with
options that commit the whole session. This forces me to set prefetch=1;
otherwise a commit() could also commit a message that is actually still
being processed.
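
A rough sketch of what I mean (the broker URL, queue name, and process() method are placeholders, and the prefetch option on the URI is the ActiveMQ-specific way to set it):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class LatchedConsumer {

    public static void main(String[] args) throws Exception {
        // prefetch=1 so the broker never hands this consumer more than one
        // unacknowledged message at a time.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
            "tcp://broker1:61616?jms.prefetchPolicy.queuePrefetch=1");
        Connection connection = factory.createConnection();
        connection.start();

        // Transacted session: commit() acknowledges everything consumed (and
        // sends everything produced) on this session as one unit.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        Queue queue = session.createQueue("tasks");          // placeholder queue name
        MessageConsumer consumer = session.createConsumer(queue);

        ExecutorService workers = Executors.newFixedThreadPool(1);

        consumer.setMessageListener(message -> {
            CountDownLatch done = new CountDownLatch(1);
            workers.submit(() -> {
                try {
                    process(message);                        // the ~30 s task
                } finally {
                    done.countDown();
                }
            });
            try {
                done.await();                                // block dispatch until the task finishes
                session.commit();                            // only now ask the broker for the next message
            } catch (Exception e) {
                try { session.rollback(); } catch (Exception ignored) { }
            }
        });
    }

    private static void process(Message message) {
        // application-specific work goes here
    }
}
```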

This leaves me with a situation where I need to be clever about how I fetch
from the queue servers.

If I prefetch on ALL queue servers I'm kind of back to where I was to begin
with.

I was thinking of implementing the following solution, which should work and
minimizes the downsides. I wanted feedback on it.

If I have, say, 1000 worker threads, what I do is allow up to 10% of the
number of worker threads to be prefetched and stored in a local queue
(ArrayBlockingQueue).

In this example this would be 100 messages.

The problem then is how we read in parallel from each server.

I think in this situation we then allow each queue server to supply 10% of
the buffered messages, so in this case 10 from each.

So now we end up in a situation where we're allowed to prefetch 10 messages
from each queue server, and the local buffer can grow to hold 100 messages.

The latency for processing a message would then be roughly the average time
per task per thread being indexed, which I think will keep the latencies low.
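
Something like this is what I have in mind for the bounded buffer; the class and method names are made up, and the 10% numbers are the ones from the example above:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Semaphore;

import javax.jms.Message;

/** Bounds the total buffered messages to 10% of the worker threads and
 *  splits that allowance evenly across the brokers. */
public class BoundedPrefetchBuffer {

    /** A buffered message tagged with the broker it came from. */
    public static final class Entry {
        final Message message;
        final int broker;
        Entry(Message message, int broker) { this.message = message; this.broker = broker; }
    }

    private final BlockingQueue<Entry> buffer;
    private final Semaphore[] perBroker;

    public BoundedPrefetchBuffer(int workerThreads, int brokers) {
        int capacity = Math.max(1, workerThreads / 10);   // e.g. 1000 workers -> 100 slots
        int share = Math.max(1, capacity / brokers);      // e.g. 100 slots / 10 brokers -> 10 each
        this.buffer = new ArrayBlockingQueue<>(capacity);
        this.perBroker = new Semaphore[brokers];
        for (int i = 0; i < brokers; i++) {
            perBroker[i] = new Semaphore(share);
        }
    }

    /** Reader thread for broker i: block until that broker is allowed
     *  another buffered message, then add it. */
    public void put(int broker, Message message) throws InterruptedException {
        perBroker[broker].acquire();
        buffer.put(new Entry(message, broker));
    }

    /** Worker thread: take the next message and free its broker's slot. */
    public Message take() throws InterruptedException {
        Entry e = buffer.take();
        perBroker[e.broker].release();
        return e.message;
    }
}
```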

Also, I think over-prefetch could be a common anti-pattern, and this could
serve as a general solution to it.

If you agree, I'm willing to document the problem.

Additionally, I think this comes close to the ideal multi-headed solution
from queueing theory, using multiple worker heads. It just becomes more
interesting because we have imperfect information from the queue servers,
so we have to make educated guesses about their behavior.


--

We’re hiring if you know of any awesome Java Devops or Linux Operations
Engineers!

Founder/CEO Spinn3r.com
Location: *San Francisco, CA*
blog: http://burtonator.wordpress.com
… or check out my Google+ profile
<https://plus.google.com/102718274791889610666/posts>

Re: Dealing with the "over-prefetch" problem with large numbers of workers and many queue servers

Tim Bain
Right off the top, can't you use INDIVIDUAL_ACK here, rather than
committing transactions?  That seems like the ideal mode to let you choose
which messages to ack without having to ack all the ones up to a certain
point.

The only complication is that I think your prefetch size would need to be
equal to (or greater than, but that's not ideal for load balancing) the
number of current consumers on the session, which could be complicated to
configure.  But you might be able to use a prefetch buffer size of 0 to
work around it; I'm not sure how that would interact with INDIVIDUAL_ACK,
since I've never tried using a prefetch size of 0, but it would be simple
enough for you to test.

If it works, a prefetch buffer size of 0 would be better than a size of 1
with AUTO_ACK, because there would be nothing prefetched to any client that
wasn't actively being processed, so new consumers wouldn't be starved by
the broker having already passed out the backlog to consumers who weren't
ready for their next message.
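
Roughly what I'm describing, as an untested sketch (the broker URL and queue name are placeholders):

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQSession;

public class IndividualAckConsumer {

    public static void main(String[] args) throws Exception {
        // prefetch=0: the consumer pulls messages on demand instead of the
        // broker pushing a backlog to it.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
            "tcp://broker1:61616?jms.prefetchPolicy.queuePrefetch=0");
        Connection connection = factory.createConnection();
        connection.start();

        // INDIVIDUAL_ACKNOWLEDGE is an ActiveMQ extension: acknowledge() acks
        // only the message it is called on, not everything before it.
        Session session = connection.createSession(false, ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE);
        Queue queue = session.createQueue("tasks");       // placeholder queue name
        MessageConsumer consumer = session.createConsumer(queue);

        while (true) {
            Message message = consumer.receive();          // blocks until a message is available
            try {
                process(message);                          // the ~30 s task
                message.acknowledge();                     // ack just this message once it's done
            } catch (Exception e) {
                // unacknowledged messages are redelivered when the session closes
            }
        }
    }

    private static void process(Message message) {
        // application-specific work goes here
    }
}
```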

Also, I'm curious about how a 30-second message with a prefetch size of 1
results in a 5-minute latency; why isn't that 2 * 30 seconds = 1 minute?

Tim


Re: Dealing with the "over-prefetch" problem with large numbers of workers and many queue servers

Martin Lichtin
In reply to this post by Kevin Burton
Your problem sounds a bit more complex, but I just wanted to mention that one can set usePrefetchExtension="false".
From the docs:

The default behavior of a broker is to use delivery acknowledgements to determine the state of a consumer's prefetch buffer. For example, if a consumer's prefetch limit is configured as 1 the broker will dispatch 1 message to the consumer and when the consumer acknowledges receiving the message, the broker will dispatch a second message. If the initial message takes a long time to process, the message sitting in the prefetch buffer cannot be processed by a faster consumer.

If the behavior is causing issues, it can be changed such that the broker will wait for the consumer to acknowledge that the message is processed before refilling the prefetch buffer. This is accomplished by setting a destination policy on the broker to disable the prefetch extension for specific destinations.
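
Concretely, that's a destination policy entry along these lines (the queue wildcard here is just an example):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <!-- the broker waits for the in-flight message to be acknowledged
             before refilling this consumer's prefetch buffer -->
        <policyEntry queue=">" usePrefetchExtension="false"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>
</broker>
```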

- Martin



Re: Dealing with the "over-prefetch" problem with large numbers of workers and many queue servers

Tim Bain
Isn't usePrefetchExtension=false the same as queuePrefetch=0 and
topicPrefetch=0 via policies?  I always thought they were just two ways
(and not the only two, because you can set it per-connection factory or per
destination) to do the same thing.  Or am I missing something here?
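
For reference, the policy route I mean looks roughly like this (the wildcards and values are just examples):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- per-destination prefetch limits set on the broker side -->
      <policyEntry queue=">" queuePrefetch="0"/>
      <policyEntry topic=">" topicPrefetch="0"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```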

Re: Dealing with the "over-prefetch" problem with large numbers of workers and many queue servers

Kevin Burton
In reply to this post by Tim Bain
Sorry for the delay in replying. I was dealing with a family issue that I
needed to prioritize...

On Wed, Oct 21, 2015 at 6:52 AM, Tim Bain <[hidden email]> wrote:

> Right off the top, can't you use INDIVIDUAL_ACK here, rather than
> committing transactions?  That seems like the ideal mode to let you choose
> which messages to ack without having to ack all the ones up to a certain
> point.
>
>
I thought about that. We had moved to transacted sessions to avoid
over-indexing, because our tasks create more messages, and this way I can
commit them in bulk as one unit.

But maybe I can just live with the "at least once" semantics: if the
transactions aren't combined, I'll just execute a message at least once.
But there might be a failure scenario where we execute the second message
hundreds of times, whereas if it were in a transaction this could be avoided.


>
> Also, I'm curious about how a 30-second message with a prefetch size of 1
> results in a 5-minute latency; why isn't that 2 * 30 seconds = 1 minute?
>
>
It's because I have one connection per thread per server.

So if we have 10 servers, each thread has ten sessions, and if prefetch is
1 that means I prefetch 10 messages in total. If each message takes 30
seconds to execute, that thread will take a while to handle all ten.
This leads to significant latency.

I pushed some code last week to instrument this and our average latency
right now is 3-5 minutes between prefetching a message and servicing a
message.

Fortunately there's a timestamp added at prefetch time, so I can just take
the current time when I'm executing the message/task and subtract the
prefetch time to compute the latency.
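
Roughly, the measurement looks like this; which timestamp to subtract depends on what's actually recorded at prefetch, so ActiveMQ's brokerOutTime (set when the broker dispatches the message) is used here as one candidate, with the JMS send timestamp as a fallback:

```java
import javax.jms.Message;

import org.apache.activemq.command.ActiveMQMessage;

public final class DispatchLatency {

    /** Milliseconds between the broker dispatching the message and now. */
    public static long millisSinceDispatch(Message message) throws Exception {
        long dispatched = 0;
        if (message instanceof ActiveMQMessage) {
            // set by the broker when it dispatches the message to a consumer
            dispatched = ((ActiveMQMessage) message).getBrokerOutTime();
        }
        if (dispatched == 0) {
            // fall back to the JMS send timestamp if brokerOutTime isn't populated
            dispatched = message.getJMSTimestamp();
        }
        return System.currentTimeMillis() - dispatched;
    }
}
```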

Kevin


Re: Dealing with the "over-prefetch" problem with large numbers of workers and many queue servers

Kevin Burton
In reply to this post by Martin Lichtin
Oh nice. I'll take a look at this. This might be just what I need... it
adds complexity, but that's better than refactoring my code if I can avoid it ;)

Kevin


Re: Dealing with the "over-prefetch" problem with large numbers of workers and many queue servers

Tim Bain
In reply to this post by Kevin Burton
On Fri, Oct 30, 2015 at 6:29 PM, Kevin Burton <[hidden email]> wrote:

> sorry for the delay in reply.  was dealing with a family issue that I
> needed to prioritize...
>
> On Wed, Oct 21, 2015 at 6:52 AM, Tim Bain <[hidden email]> wrote:
>
> > Right off the top, can't you use INDIVIDUAL_ACK here, rather than
> > committing transactions?  That seems like the ideal mode to let you
> choose
> > which messages to ack without having to ack all the ones up to a certain
> > point.
> >
> >
> I thought about that. We had moved to sessions to avoid over-indexing
> because our tasks create more messages and this way I can bulk commit them
> as one unit.
>
> But maybe if I just deal with the "at least once" semantics while the
> transactions aren't combined I'll just execute a message at least once.
> But there might be a failure scenario where we execute the second message
> hundreds of times where if it was a transaction this could be avoided.
>

I think there's a limit to how many redelivery attempts you're willing to
take before you send the message to the DLQ, which I think would cover most
scenarios where that would happen in the wild.  (You could always construct
an arbitrarily bad failure case, but the odds of actually seeing it in the
real world get vanishingly small as it gets uglier.)


> > Also, I'm curious about how a 30-second message with a prefetch size of 1
> > results in a 5-minute latency; why isn't that 2 * 30 seconds = 1 minute?
> >
> >
> It's because I have one connection per thread per server.
>
> So if we have 10 servers, each thread has ten sessions.  and if prefetch is
> 1 then that means I prefetch 10 total messages.  If each message takes 30
> seconds to execute that thread will take a while to handle all ten.
> This leads to significant latency.
>

If I'm understanding correctly, you've got a single client consuming one
message at a time while consuming from N brokers that are presumably not
networked (otherwise why would you connect to more than one of them)?
Why?  (Among other things, why not just network the brokers and simplify
your use-case?)
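
For what it's worth, networking the brokers is just a connector on the broker side, roughly like this (the broker names and the static URI are illustrative):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
  <networkConnectors>
    <!-- forwards messages to brokerB based on consumer demand there;
         duplex makes the link bidirectional -->
    <networkConnector name="a-to-b" uri="static:(tcp://brokerB:61616)" duplex="true"/>
  </networkConnectors>
</broker>
```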



Re: Dealing with the "over-prefetch" problem with large numbers of workers and many queue servers

Kevin Burton
> I think there's a limit to how many redelivery attempts you're willing to
> take before to send the message to the DLQ, which I think would cover most
> scenarios when that would happen in the wild.  (You could always construct
> an arbitrarily bad failure case, but the odds of actually seeing it in the
> real world get vanishingly small as it gets uglier.)
>
>
That is true.  Right now our DLQ redelivery policy is 5... so we would hit
the limit of 5 and the secondary messages would get re-executed.
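
For reference, that limit comes from the client-side redelivery policy on the connection factory, roughly like this (the backoff settings here are just illustrative):

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryConfig {

    public static ActiveMQConnectionFactory newFactory(String brokerUrl) {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);
        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setMaximumRedeliveries(5);        // after 5 failed deliveries the message goes to the DLQ
        policy.setInitialRedeliveryDelay(1000);  // illustrative: wait 1s before the first retry
        policy.setUseExponentialBackOff(true);   // illustrative: back off further on each retry
        return factory;
    }
}
```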


> > So if we have 10 servers, each thread has ten sessions.  and if prefetch
> is
> > 1 then that means I prefetch 10 total messages.  If each message takes 30
> > seconds to execute that thread will take a while to handle all ten.
> > This leads to significant latency.
> >
>
> If I'm understanding correctly, you've got a single client consuming one
> message at a time while consuming from N brokers that are presumably not
> networked (otherwise why would you connect to more than one of them)?
> Why?  (Among other things, why not just network the brokers and simplify
> your use-case?)
>
>
I had thought about this initially but ruled out running a network of
brokers for a number of reasons.

1. I wanted to keep things simple so I could port to another queue system
in the future.

2. Our system is already 'ridiculously parallelizable' so just splitting
the brokers up works easily enough.

3. I thought my initial implementation would be fine :)

4. The documentation for a network of brokers wasn't really all there, and I
had a lot of residual questions, so I just assumed that the functionality
itself wasn't really all there either.

I'm still kind of unclear on how this works.  For example, I would want
sharded and replicated brokers, where we have 1/Nth of our data in each
shard and each shard also has a primary and a backup replica.

We basically get this now as soon as I turn on replication, but then again I
don't *really* understand the topology and how it works.

With the current system/documentation I'm unclear if this is even possible.

Also, the mentions of JXTA make me think the documentation hasn't been
updated in a decade ;)

Kevin

