Horms wrote:
On Fri, Jun 23, 2006 at 12:11:50PM -0400, Dave Anderson wrote:
I added some code to kdump to have it record CR3 for dom0. This is
done using a second note in the per-cpu notes area, which for now
just stores a single 4-byte entity: the CR3 mfn for that CPU,
if it was present in dom0.
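To illustrate the note layout described above, here is a minimal sketch of how such a per-cpu note could be appended to a buffer. This is not the actual arch/x86/crash.c code; the struct, helper names, and buffer handling are invented for illustration (the header layout matches Elf32_Nhdr, and 0x10000001 is the n_type discussed later in this thread):

```c
#include <stdint.h>
#include <string.h>

/* Layout of an ELF note header (matches Elf32_Nhdr from <elf.h>). */
typedef struct {
    uint32_t n_namesz;   /* length of the name, including the NUL */
    uint32_t n_descsz;   /* length of the descriptor */
    uint32_t n_type;     /* note type, e.g. 0x10000001 for the CR3 note */
} elf32_nhdr_t;

/* ELF note name and descriptor fields are padded to 4-byte boundaries. */
static size_t note_align(size_t n) { return (n + 3) & ~(size_t)3; }

/* Append one note carrying a single 32-bit mfn; returns bytes written. */
static size_t append_cr3_note(unsigned char *buf, const char *name,
                              uint32_t cr3_mfn)
{
    size_t namesz = strlen(name) + 1;
    elf32_nhdr_t hdr = { (uint32_t)namesz, sizeof(cr3_mfn), 0x10000001 };
    unsigned char *p = buf;

    memcpy(p, &hdr, sizeof(hdr));
    p += sizeof(hdr);
    memcpy(p, name, namesz);
    p += note_align(namesz);            /* pad the name out to 4 bytes */
    memcpy(p, &cr3_mfn, sizeof(cr3_mfn));
    p += note_align(sizeof(cr3_mfn));

    return (size_t)(p - buf);
}
```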

I have made a dump available that includes this. The tarball
also includes the kernels, xen, symbol files, and patches to xen.
If you want to find the CR3-saving code, it's in ./arch/x86/crash.c

I plan to post this update to xen-devel shortly, hopefully tomorrow,
after porting it to the latest xen tree (I'm still working off a tree
from about three weeks ago).

http://packages.vergenet.net/tmp/xen-unstable.hg+kexec-20060616.tar.bz2
OK -- here's a proof-of-concept running the dom0 vmlinux against the
xen kdump:

# crash vmlinux vmcore

crash 4.0-2.31-rc1
Copyright (C) 2002, 2003, 2004, 2005, 2006 Red Hat, Inc.
Copyright (C) 2004, 2005, 2006 IBM Corporation
Copyright (C) 1999-2006 Hewlett-Packard Co
Copyright (C) 2005 Fujitsu Limited
Copyright (C) 2005 NEC Corporation
Copyright (C) 1999, 2002 Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions. Enter "help copying" to see the conditions.
This program has absolutely no warranty. Enter "help warranty" for details.

GNU gdb 6.1
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i686-pc-linux-gnu"...

KERNEL: vmlinux
DUMPFILE: vmcore
CPUS: 2
DATE: Wed Jun 14 15:05:01 2006
UPTIME: 00:04:40
LOAD AVERAGE: 1.22, 0.39, 0.13
TASKS: 94
NODENAME: aiko.lab.ultramonkey.org
RELEASE: 2.6.16.13-xen
VERSION: #7 SMP Fri Jun 9 16:25:32 JST 2006
MACHINE: i686 (866 Mhz)
MEMORY: 887.4 MB
PANIC: "SysRq : Trigger a crashdump"
PID: 3949
COMMAND: "do_kdump"
TASK: f3e64030 [THREAD_INFO: f3dba000]
CPU: 1
STATE: TASK_RUNNING (SYSRQ)

crash> bt -a
PID: 0 TASK: c02ce460 CPU: 0 COMMAND: "swapper"
#0 [c030ff34] schedule at c028e648
#1 [c030ffb0] cpu_idle at c0103e9f

PID: 3949 TASK: f3e64030 CPU: 1 COMMAND: "do_kdump"
#0 [f3dbbed8] crash_kexec at c0140c45
#1 [f3dbbf28] __handle_sysrq at c01f54e4
#2 [f3dbbf54] write_sysrq_trigger at c019cbff
#3 [f3dbbf6c] vfs_write at c0168dbf
#4 [f3dbbf90] sys_write at c0169736
#5 [f3dbbfb8] system_call at c0105542
EAX: 00000004 EBX: 00000001 ECX: 080f8408 EDX: 00000002
DS: 007b ESI: 00000002 ES: 007b EDI: b7f007c0
SS: 007b ESP: bfb5ffc8 EBP: bfb5ffe4
CS: 0073 EIP: b7e93028 ERR: 00000004 EFLAGS: 00000246
crash>

As I discussed earlier, given that this is a writable-page-table
kernel, having any legitimate CR3 (I just use the first one found
in the ELF header), I first get the value of "max_pfn" (x86),
and then the value of "phys_to_machine_mapping", which points to
dom0's "phys_to_machine_mapping[max_pfn]" array. From that, all
subsequent pseudo-physical address requests can be translated
into the physical address for the existing read_netdump() function
to access. As we talked about before, this won't work for
shadow-page-table kernels; for those I would need to have the
"pfn_to_mfn_frame_list_list" mfn value from the shared,
per-domain, "arch_shared_info" structure(s). With that single
value, the phys_to_machine_mapping[] array can be resurrected
for both writable- and shadow-page-table kernels.
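The translation step described above can be sketched roughly as follows, assuming the phys_to_machine_mapping[] array has already been read out of the dump. The function and macro names here are illustrative, not crash's actual internals:

```c
#include <stdint.h>

#define XEN_PAGE_SHIFT 12
#define XEN_PAGE_MASK  ((1UL << XEN_PAGE_SHIFT) - 1)

/* p2m is the phys_to_machine_mapping[] array recovered from the dump;
 * max_pfn bounds it.  Returns the machine address for a pseudo-physical
 * address, or ~0UL if the pfn is out of range. */
static unsigned long pseudo_to_machine(const uint32_t *p2m,
                                       unsigned long max_pfn,
                                       unsigned long pseudo_phys)
{
    unsigned long pfn = pseudo_phys >> XEN_PAGE_SHIFT;

    if (pfn >= max_pfn)
        return ~0UL;                    /* not a RAM page of this domain */

    /* Swap the pseudo-physical frame for the machine frame, keeping
     * the byte offset within the page. */
    return ((unsigned long)p2m[pfn] << XEN_PAGE_SHIFT) |
           (pseudo_phys & XEN_PAGE_MASK);
}
```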

Also, with either the cr3 or pfn_to_mfn_frame_list_list schemes,
if those values were made available for *all* of the other domains
instead of just dom0, then we could run a crash session against
any of the domains on the system.

As we now have a whole extra crash note devoted to storing cr3 and only
cr3, it would be trivial to put pfn_to_mfn_frame_list_list in there as
well. Would that be sufficient? Would you need information about what
paging scheme was in effect in there as well? We have roughly 800 bytes
of unused crash note space, so adding a few extra values should be no
problem whatsoever.

I don't have a strong opinion either way.  You could leave the 0x10000001 note
definition to mean this is a "writable-page-table cr3", and perhaps 0x10000002
to pass a "writable-page-table pfn_to_mfn_frame_list_list" and 0x10000003 to
pass a "shadow-page-table pfn_to_mfn_frame_list_list".  Or whatever, I don't
really care one way or another, just that the information gets passed in one
manner or another.  Given that I've got the cr3 support in place, it would be
nice to leave it as one option for bootstrapping the whole deal.  But also,
in the meantime, I can start work on the "pfn_to_mfn_frame_list_list" option
by simply kludging its mfn value in place of the cr3 mfn with the vmcore I've been
working with.  (I think...)
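One hypothetical way to encode that suggestion is as distinct n_type values, so a reader can tell from the note type alone what the 4-byte descriptor holds and which paging scheme is in effect. Only 0x10000001 exists in the patches discussed here; the other two names and values are invented:

```c
/* Hypothetical note types following the suggestion above.  Each note's
 * 4-byte descriptor holds the indicated mfn. */
enum xen_kdump_note_type {
    NT_XEN_KDUMP_CR3     = 0x10000001, /* writable-pt: dom0 CR3 mfn */
    NT_XEN_KDUMP_WPT_FLL = 0x10000002, /* writable-pt: pfn_to_mfn_frame_list_list mfn */
    NT_XEN_KDUMP_SPT_FLL = 0x10000003, /* shadow-pt:   pfn_to_mfn_frame_list_list mfn */
};
```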

But we will need to know whether the kernel is writable- or shadow-page-table
enabled, and I suppose putting that in the ELF header would be helpful if there
is no way to tell by simply looking at the vmcore's symbols for clues.  At this
point, I'm just defaulting to writable-page-table support, and to get
shadow-page-table support I've added a command line switch.  But I would think
there should be some kind of clue in the vmcore, for example, a symbol that does
or doesn't exist depending upon which scheme is being used, which would make its
detection automatic.
We (Red Hat) are only using writable-page-table kernels at the moment, so
I really haven't looked into it.  With your vmcores, we do have the liberty of being
able to pass that information in the ELF header.  On the other hand, that info
is probably not readily available in guest xendump or guest "xm save" output
files, so again, I'd like to be able to figure it out some other way.

In any case, this is pretty cool for starters...

Indeed it is, and the info you have above does seem to be correct.
Off the top of my head anyway.

BTW, I've created a new n_type value to handle this particular
invocation, which I understand will be subject to change.
Note that the spelling in your PT_NOTE is a bit strange:

crash> help -n
...
Elf32_Nhdr:
n_namesz: 18 ("Xen Domanin-0 CR3")
n_descsz: 4
n_type: 10000001 (NT_XEN_KDUMP_CR3)
00027227
...
crash>
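For what it's worth, the n_namesz of 18 is consistent: "Xen Domanin-0 CR3" is 17 characters plus the trailing NUL. A minimal sketch of reading one such note entry, roughly what "help -n" has to do, could look like the following (the struct and helper are invented for illustration; the header layout matches Elf32_Nhdr):

```c
#include <stdint.h>
#include <string.h>

/* Layout of an ELF note header (matches Elf32_Nhdr from <elf.h>). */
typedef struct {
    uint32_t n_namesz;   /* length of the name, including the NUL */
    uint32_t n_descsz;   /* length of the descriptor */
    uint32_t n_type;
} elf32_nhdr_t;

/* Copy out the note's name and return its 32-bit descriptor, assuming
 * a 4-byte descriptor as in the CR3 note above. */
static uint32_t read_note(const unsigned char *p, char *name_out)
{
    elf32_nhdr_t hdr;
    uint32_t desc;

    memcpy(&hdr, p, sizeof(hdr));
    p += sizeof(hdr);
    memcpy(name_out, p, hdr.n_namesz);
    p += (hdr.n_namesz + 3) & ~3u;      /* name is padded to 4 bytes */
    memcpy(&desc, p, sizeof(desc));
    return desc;
}
```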

Anyway, I'll do the same thing for x86_64 (untested) and
update the crash release so you'll have something to work
with.

Cool. I'd offer you an x86_64 vmcore, but we actually haven't
successfully kdumped x86_64 xen yet. The code should work, but...
This may well relate to linux x86_64 kdump being generally flaky
and highly CPU-rev dependent. We are hoping to investigate this
in the not too distant future.

In any case, I can give you a vmlinux, xen and the works if you like.
And the x86_64 bits are included in the tarball you already have,
with the vmcore you used above. Mmm, actually, they might be slightly
old; let me know if you want the latest and greatest. But at any
rate, it doesn't quite work :(


No, I'll wait for you to get it to work...

Thanks,
  Dave