----- Original Message -----
Hi Dave,
We have a fairly large vmcore (around 250GB) with a very long kmem
cache list, and we are trying to determine whether a loop exists in it.
The list has literally billions of entries. Before you roll your eyes,
hear me out.
Just running the following command:
crash> list -H 0xffff8ac03c81fc28 > list-yeller.txt
seems to increase crash's memory usage very significantly over time,
to the point that top shows the following:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
25522 dwysocha 20 0 11.2g 10g 5228 R 97.8 17.5 1106:34 crash
When I started the
command yesterday it was adding around 4 million entries to the file
per minute. At that rate I estimated the command would finish in around
10 hours, and I could use the output to determine whether or not there
was a loop in the list. But today it has slowed to less than 1/10th of
that, around 300k entries per minute.
Is this type of memory usage with list enumeration expected or not?
I have not yet begun to delve into the code, but figured you might have
a gut feel for whether this is expected, and whether it is fixable.
Yes, by default all list entries encountered are put into the built-in
hash queue, specifically for the purpose of determining whether there
are any duplicate entries. So if the command is still running, it
hasn't found any.
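
Conceptually, remembering every entry you visit looks something like
the sketch below (just the general shape of the technique, not crash's
actual code), which is why memory grows with the number of entries
walked:

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy duplicate detector: remember every list address seen in a
     * hash table; a repeated address means the list loops back on
     * itself.  One small allocation per new entry is what makes a
     * walk over billions of entries so memory-hungry. */

    #define NBUCKETS (1UL << 20)

    struct seen {
        unsigned long addr;
        struct seen *next;
    };

    static struct seen *buckets[NBUCKETS];

    static int seen_before(unsigned long addr)
    {
        unsigned long h = (addr >> 4) & (NBUCKETS - 1);
        struct seen *s;

        for (s = buckets[h]; s; s = s->next)
            if (s->addr == addr)
                return 1;           /* duplicate: the list has a loop */

        s = malloc(sizeof(*s));     /* grows with every new entry */
        if (!s)
            abort();
        s->addr = addr;
        s->next = buckets[h];
        buckets[h] = s;
        return 0;
    }

    int main(void)
    {
        /* pretend walk over list addresses, with a deliberate repeat */
        unsigned long walk[] = { 0x1000, 0x2000, 0x3000, 0x2000 };
        int i;

        for (i = 0; i < 4; i++) {
            if (seen_before(walk[i])) {
                printf("duplicate 0x%lx after %d entries\n", walk[i], i);
                return 0;
            }
        }
        printf("no duplicates\n");
        return 0;
    }
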
To avoid the hashing feature, try entering "set hash off"
before kicking off the command. But of course, with hashing off, if
the list does contain a loop, the command will run forever.
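
For the case above, that sequence would be something like:

    crash> set hash off
    crash> list -H 0xffff8ac03c81fc28 > list-yeller.txt

With hashing off, memory usage should stay roughly flat no matter how
long the list is.
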
Dave
Thanks.
--
Crash-utility mailing list
Crash-utility(a)redhat.com
https://www.redhat.com/mailman/listinfo/crash-utility