It is possible that some uber-debugger has done it in a local build, but in
my experience expect also means putting a lot of work into the non-debugging
plumbing just to get it working. I liked pykdump.so better; it does the job
with minimal effort, and while debugging you always have either a quick
script or a standard library of commands at hand.
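
To give a flavour of the "quick script" side, here is a minimal sketch of
what a pykdump script can look like. It assumes the exec_crash_command()
and readSU() entry points behave the way they do in the builds I have used,
and that crash's "mount" output has the super_block address in the second
column, so treat it as an illustration rather than something to paste
verbatim:

# Minimal pykdump-style sketch, run inside crash with the pykdump
# extension loaded. API names and output parsing are assumptions;
# verify against your pykdump version and kernel.
from pykdump.API import exec_crash_command, readSU

# Run an ordinary crash command and capture its output as a string.
for line in exec_crash_command("mount").splitlines()[1:]:
    fields = line.split()
    if len(fields) < 5:
        continue
    # Wrap the raw super_block address in a typed object; members can
    # then be read by name.
    sb = readSU("struct super_block", int(fields[1], 16))
    print("%-25s s_magic=0x%x" % (fields[4], sb.s_magic))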

Thanks,
Ratnam

On Fri, Dec 11, 2009 at 3:29 PM, James Washer <washer@trlp.com> wrote:
Often I'd like to be able to run one crash command, massage the data
produced, and run follow-up commands using the massaged data.

A (possibly crazy) example: run the mount command and collect the
super_block addresses; for each super_block, get the s_inodes list head;
traverse that list to each inode; for each inode, find its i_data
(address_space) and get the number of pages. Now sum these up and print a
table of filesystem mount points and the number of cached pages for each...
Perhaps I'd even traverse the struct pages to provide a count of clean and
dirty pages for each file system.
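
For what it's worth, that walk maps fairly directly onto a scriptable
extension such as pykdump. The sketch below is only an illustration: the
helpers (exec_crash_command, readSU, readSUListFromHead, Addr) and the
field names (s_inodes, i_sb_list, i_data, nrpages) are assumptions about a
particular pykdump build and kernel version, so check each one against
your own tree:

# Hedged sketch: per-mount cached page counts, pykdump style.
from pykdump.API import exec_crash_command, readSU, readSUListFromHead, Addr

def cached_pages_per_mount():
    # crash's "mount" output: VFSMOUNT SUPERBLK TYPE DEVNAME DIRNAME
    for line in exec_crash_command("mount").splitlines()[1:]:
        fields = line.split()
        if len(fields) < 5:
            continue
        sb = readSU("struct super_block", int(fields[1], 16))
        total = 0
        # Walk the per-superblock inode list and add up each inode's
        # cached pages (address_space.nrpages).
        for inode in readSUListFromHead(Addr(sb.s_inodes), "i_sb_list",
                                        "struct inode"):
            total += inode.i_data.nrpages
        print("%-30s %10d pages" % (fields[4], total))

cached_pages_per_mount()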

I do do this by hand (i.e. mount > mount.file; perlscript mount.file >
crash-script-step-1; then, back in crash, I do ". crash-script-step-1 >
data-file-2"; and repeat with more massaging). This is gross, prone to
error, and not terribly fast.

I'd love to start crash as a child of perl and either use expect (which
is a bit of a hack) or, better yet, have some machine interface to crash
(a la gdb/MI)...
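
For the expect route, something like pexpect can at least make the
round-trips scriptable from one place. A rough sketch, where the crash
invocation, the prompt pattern and the follow-up command are all
assumptions about a typical setup:

# Rough sketch of driving crash as a child process with pexpect.
# Paths, prompt pattern and timeout are assumptions; adjust to taste.
import pexpect

child = pexpect.spawn("crash /path/to/vmlinux /path/to/vmcore",
                      encoding="utf-8", timeout=300)
child.expect("crash> ")              # wait for the interactive prompt

def run(cmd):
    """Send one crash command and return its output as text."""
    child.sendline(cmd)
    child.expect("crash> ")
    # child.before holds everything up to the next prompt, including
    # the echoed command; drop that first line.
    return child.before.split("\n", 1)[1]

# First pass: collect super_block addresses from "mount", then issue
# follow-up commands built from the massaged data.
for line in run("mount").splitlines()[1:]:
    fields = line.split()
    if len(fields) >= 5:
        print(run("struct super_block.s_inodes %s" % fields[1]))

child.sendline("q")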

I know... it's open source, I should write it myself. I just don't want
to reinvent the wheel if someone else has already done something like
this.

Perhaps I need to learn sial, but what little sial I've looked at seems
a bit low-level for my needs.

Has anyone had much luck using expect with crash?

thanks

 - jim


--
Crash-utility mailing list
Crash-utility@redhat.com
https://www.redhat.com/mailman/listinfo/crash-utility