(2013/04/12 9:24), James Washer wrote:
Machines are getting ever bigger. I routinely look at crash dumps from
systems with 2TB or more of memory. I'm finding I'm wasting too much
time waiting for crash to complete a command. For example, "kmem -s" took
close to an hour on the dump I'm looking at now.
Has anyone ever looked into multi-threading crash? Given the "kmem -s"
example above, a thread could be created for each cache (up to some
defined limit of threads).
Things like "foreach" could spawn threads. I'm sure there are lots of
other opportunities.
Yes, I know, it's open source, so I should just go do it myself. Still, I'd
like to hear the pros and cons of this idea.
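
For concreteness, a minimal sketch of the "bounded pool of worker threads,
one cache per task" idea above might look like the following, using plain
POSIX threads. Everything in it (the cache list, the per-cache work, and all
of the names) is made up for illustration; none of it is crash's real
internal API.

  /* Illustrative only: a small fixed pool of pthreads pulling slab
   * caches off a shared list and scanning them in parallel. */
  #include <pthread.h>
  #include <stdio.h>

  #define NR_WORKERS 4              /* "some defined limit of threads" */

  struct cache {                    /* stand-in for one kmem cache */
      const char *name;
      unsigned long addr;
  };

  static struct cache caches[] = {  /* stand-in for the cache list */
      { "kmalloc-64",  0x1000 },
      { "kmalloc-128", 0x2000 },
      { "dentry",      0x3000 },
      { "inode_cache", 0x4000 },
  };
  static const int nr_caches = sizeof(caches) / sizeof(caches[0]);

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static int next_cache;            /* shared cursor into caches[] */

  /* Placeholder for the real per-cache work, e.g. what "kmem -s"
   * does for a single cache. */
  static void scan_one_cache(struct cache *c)
  {
      printf("scanning %-16s at %#lx\n", c->name, c->addr);
  }

  /* Each worker grabs the next unscanned cache until none are left. */
  static void *worker(void *arg)
  {
      (void)arg;
      for (;;) {
          struct cache *c;

          pthread_mutex_lock(&lock);
          if (next_cache >= nr_caches) {
              pthread_mutex_unlock(&lock);
              break;
          }
          c = &caches[next_cache++];
          pthread_mutex_unlock(&lock);

          scan_one_cache(c);
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t tid[NR_WORKERS];
      int i;

      for (i = 0; i < NR_WORKERS; i++)
          pthread_create(&tid[i], NULL, worker, NULL);
      for (i = 0; i < NR_WORKERS; i++)
          pthread_join(tid[i], NULL);
      return 0;
  }

A small fixed pool with a shared cursor keeps the thread count bounded no
matter how many caches the kernel has, instead of spawning one thread per
cache. The hard part in practice would likely be making crash's shared state
safe to call from multiple threads, which this sketch does not attempt.
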
FYI, some performance improvement work for "kmem -s" and "kmem -p" was done
in v6.0.3. If you are using a crash utility older than 6.0.3, upgrading to
the latest version might be enough for you.
From the ChangeLog:
6.0.3 - Fix to gdb-7.3.1/bfd/bfdio.c to properly zero out a complete
struct stat with a corrected memset argument; caught when
compiling with the Clang Static Analyzer.
(idoenmez(a)suse.de)
<cut>
- Significant speed increase of the "kmem -p" command,
especially on large-memory systems.
(qiaonuohan(a)cn.fujitsu.com)
<cut>
- Performance increase for the "kmem -s <address>" option on
kernels configured with CONFIG_SLAB, most notably on kernels
whose kmem_cache.array[NR_CPUS] array is several pages in
size.
(qiaonuohan(a)cn.fujitsu.com)
--
Thanks.
HATAYAMA, Daisuke