ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

paulgale
Hi,

I'm using ActiveMQ 5.8.0 on RHEL 6.1

I have noticed the following exception appearing in my broker's log
file. It appears to be related to the once hourly check for expired
messages, as these occur at _exactly_ the same time past the hour, as
given by the expireMessagesPeriod attribute on our topics. I have no
other scheduled checks that go off hourly. Scheduler support is set to
false for the broker (if that matters). The message store is located
on an NFS v3 mount.

Any thoughts as to what could be causing this, or whether it's a
well-known issue that's fixed in 5.9.0? I couldn't find anything in
JIRA about this other than AMQ-3906, which was not resolved. Although
'Chunk stream does not exist' has cropped up a few times, it appears
to have been fixed in earlier releases.

INFO   | jvm 1    | 2013/09/26 05:01:08.362 | WARN  | Topic
              | Failed to browse Topic: Auction.Event | ActiveMQ
Broker[queue01.qa1] Scheduler
INFO   | jvm 1    | 2013/09/26 05:01:08.362 | java.io.EOFException:
Chunk stream does not exist, page: 19 is marked free
INFO   | jvm 1    | 2013/09/26 05:01:08.362 |   at
org.apache.activemq.store.kahadb.disk.page.Transaction$2.readPage(Transaction.java:470)
INFO   | jvm 1    | 2013/09/26 05:01:08.362 |   at
org.apache.activemq.store.kahadb.disk.page.Transaction$2.<init>(Transaction.java:447)
INFO   | jvm 1    | 2013/09/26 05:01:08.362 |   at
org.apache.activemq.store.kahadb.disk.page.Transaction.openInputStream(Transaction.java:444)
INFO   | jvm 1    | 2013/09/26 05:01:08.363 |   at
org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:420)
INFO   | jvm 1    | 2013/09/26 05:01:08.363 |   at
org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:377)
INFO   | jvm 1    | 2013/09/26 05:01:08.363 |   at
org.apache.activemq.store.kahadb.disk.index.BTreeIndex.loadNode(BTreeIndex.java:262)
INFO   | jvm 1    | 2013/09/26 05:01:08.363 |   at
org.apache.activemq.store.kahadb.disk.index.BTreeIndex.getRoot(BTreeIndex.java:174)
INFO   | jvm 1    | 2013/09/26 05:01:08.363 |   at
org.apache.activemq.store.kahadb.disk.index.BTreeIndex.iterator(BTreeIndex.java:232)
INFO   | jvm 1    | 2013/09/26 05:01:08.363 |   at
org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex$MessageOrderIterator.<init>(MessageDatabase.java:2757)
INFO   | jvm 1    | 2013/09/26 05:01:08.363 |   at
org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex.iterator(MessageDatabase.java:2739)
INFO   | jvm 1    | 2013/09/26 05:01:08.363 |   at
org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$3.execute(KahaDBStore.java:526)
INFO   | jvm 1    | 2013/09/26 05:01:08.363 |   at
org.apache.activemq.store.kahadb.disk.page.Transaction.execute(Transaction.java:779)
INFO   | jvm 1    | 2013/09/26 05:01:08.363 |   at
org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.recover(KahaDBStore.java:522)
INFO   | jvm 1    | 2013/09/26 05:01:08.363 |   at
org.apache.activemq.store.ProxyTopicMessageStore.recover(ProxyTopicMessageStore.java:62)
INFO   | jvm 1    | 2013/09/26 05:01:08.364 |   at
org.apache.activemq.broker.region.Topic.doBrowse(Topic.java:578)
INFO   | jvm 1    | 2013/09/26 05:01:08.364 |   at
org.apache.activemq.broker.region.Topic.access$100(Topic.java:65)
INFO   | jvm 1    | 2013/09/26 05:01:08.364 |   at
org.apache.activemq.broker.region.Topic$6.run(Topic.java:703)
INFO   | jvm 1    | 2013/09/26 05:01:08.364 |   at
org.apache.activemq.thread.SchedulerTimerTask.run(SchedulerTimerTask.java:33)
INFO   | jvm 1    | 2013/09/26 05:01:08.364 |   at
java.util.TimerThread.mainLoop(Timer.java:555)
INFO   | jvm 1    | 2013/09/26 05:01:08.364 |   at
java.util.TimerThread.run(Timer.java:505)

Thanks,
Paul

Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

paulgale
Anyone have any thoughts about this?

Thanks,
Paul


Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

gtully
I really thought all of these KahaDB index corruption issues were
sorted, but if you can reproduce it in 5.9 we have a problem that needs
work. The key to any resolution is a reproducible test case.

In any event, you should be able to recover by rebuilding the index:
just delete the db.data file before a restart.
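The rebuild Gary describes can be sketched roughly as follows. This runs against a scratch directory standing in for the real KahaDB store (the actual store path and broker start/stop commands depend on your installation, so they appear only as comments):

```shell
# Scratch directory standing in for the broker's KahaDB store.
KAHADB_DIR=$(mktemp -d)
touch "$KAHADB_DIR/db.data" "$KAHADB_DIR/db-1.log"

# 1. Stop the broker first (command depends on your install),
#    e.g.: bin/activemq stop

# 2. Back up the store before touching it, then delete only the
#    index file. The journal files (db-*.log) stay in place.
cp -a "$KAHADB_DIR" "${KAHADB_DIR}.bak"
rm -f "$KAHADB_DIR/db.data"

# 3. Restart the broker, e.g.: bin/activemq start
#    On startup it replays the journal and rebuilds db.data.
ls "$KAHADB_DIR"
```

On restart the broker notices the missing index and recreates it from the journal; with a large journal this can take a while.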

--
http://redhat.com
http://blog.garytully.com

Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

paulgale
Hi Gary,

I'd love to be able to provide a failing unit test for you. However, this
exception occurs infrequently and so far only on one machine. I made a
backup copy of the message store prior to rebuilding it, if you think
examining it might help. It contains only a single 32MB journal file, in
addition to the other files. I'm assuming you have the tooling to dive
in; I wouldn't know where to start. Give me some pointers and I'll
gladly help out.

I suppose I could run some unit test code coverage tools to see if there
are any areas that stand out as lacking coverage. What packages should I
zero in on? Is there any support for code coverage analysis in the
ActiveMQ build tools? Just wondering.

Thanks,
Paul



Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

khandelwalanuj
I am also seeing the same exception with ActiveMQ v5.10. It occurs infrequently and is not reproducible.

I have already posted http://activemq.2283324.n4.nabble.com/ActiveMQ-exception-quot-Failed-to-browse-Topic-quot-td4683227.html#a4683305 


ActiveMQ gods, can you please help us out here?



Thanks,
Anuj

Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

paulgale
In my particular case I fixed it when I realized that the NFS mount
settings for the mount where the KahaDB message store was located
were misconfigured. Since correcting the settings I've not had a single
problem.

Are you using NFS?


Thanks,
Paul


Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

paulgale
All of the following is assuming you're using Linux. I'm using RHEL 6.3 to
mount an NFSv3 based device using autofs.

I should have added that the issue for me was that I had specified the
wrong block size values for the rsize/wsize parameters in the autofs mount
configuration for the device I was mounting.

I was operating under the mistaken belief that the larger the value for
these parameters the better, so I set them to 256K (in bytes). Problems
with the message store followed.

What I should have done, and ended up doing, was to determine the
device's _actual_ block size rather than guess it. You can either ask
the device's administrator what its block size is or use the stat
command.

If you really want to play it safe you could always use the default block
size for a device that supports NFSv3, which is quite conservative at 8192
bytes (I think - look it up). However, if the device you're mounting can
support larger block sizes, then the stat command is how you would find
that out.

First, mount the device using a _very_ conservative block size value, say
1024 bytes. Second, run the stat command on the mount point to see what the
device's block size actually is. It might be the default 8192 or it could
be larger. Either way you'll know.

Here's an example. Say your local mount point is /NFS then the stat command
to use is:

stat -f /NFS

The output should look something like:

File: "/NFS"
ID: 0        Namelen: 255    Type: nfs
Block size: 32768      Fundamental block size: 32768
Blocks: Total: 330424288  Free: 178080429  Available: 178080429
Inodes: Total: 257949694  Free: 246974355

The output indicates the device's block size in bytes (32768). This is
the value to plug into the rsize/wsize parameters in the mount's
definition.
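The same lookup can be scripted. This sketch uses GNU coreutils stat (`-f` queries the filesystem, `%s` is its block size); MOUNT_POINT defaults to /tmp so it runs anywhere, and the fstab server path in the comment is made up:

```shell
# Substitute your actual NFS mount point (e.g. /NFS) for the default.
MOUNT_POINT="${MOUNT_POINT:-/tmp}"

# GNU stat: -f queries the filesystem, %s prints its block size in bytes.
BLOCK_SIZE=$(stat -f -c %s "$MOUNT_POINT")

# Build matching rsize/wsize mount options from the reported block size.
MOUNT_OPTS="vers=3,rsize=${BLOCK_SIZE},wsize=${BLOCK_SIZE}"
echo "$MOUNT_OPTS"

# A matching fstab entry would look something like (server path made up):
#   filer:/export/kahadb  /NFS  nfs  ${MOUNT_OPTS},hard  0  0
```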

I hope this helps.

Thanks,
Paul



Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

gtully
Thanks for sharing this info, Paul :-)


Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

khandelwalanuj
I am using NFS.

amqgod@txnref1.nyc:/u/amqgod> stat -f ~/kahadb/
  File: "/u/amqgod/kahadb/"
    ID: 0        Namelen: 255     Type: nfs
Block size: 65536      Fundamental block size: 65536
Blocks: Total: 245760     Free: 210550     Available: 210550
Inodes: Total: 1048576    Free: 1043878

Is this issue because of NFS? If it were, I'd expect it to happen for all destinations, but I see this exception continuously for only two or three topics. I think it is because of some KahaDB corruption. Can you please take a look at this?


Thanks,
Anuj

Re: ActiveMQ 5.8.0: java.io.EOFException: Chunk stream does not exist, page: 19 is marked free

bharadwaj nakka
This is a known bug in JBoss Fuse 6.1, and the issue is fixed in the JBoss
Fuse 6.1 R2P5 rollup patch 5.


