Re: [Crash-utility] Question on online/present/possible CPUS
by Hagen, Jeffrey
Hi Petr and Dave,
I have a couple of comments on Petr's email regarding CPU count.
When the dump is the result of an NMI (NMI switch pressed) due to a hung
system, one often needs to analyze the state and backtrace for all the
CPUs. Since the kernel halts all but CPU0, the crash utility cannot
see the other "offline" CPUs.
This behavior has changed for the x86 architecture somewhere between
2.6.16 (SLES10) and 2.6.32 (SLES11) due to the removal of the x8664_pda
structure.
The function x86_64_init (in x86_64.c) now calls x86_64_per_cpu_init,
which doesn't count the offline CPUs when calculating the number of
CPUs. Previously, x86_64_cpu_pda_init (called if x8664_pda exists)
didn't check for online/offline status.
Regarding #3 in Petr's email: it appears that the set command won't
accept a value >= kt_cpus (the number of CPUs). It doesn't check whether
the CPU is offline or not.
Thanks,
Jeff Hagen
>
> Hi all,
>
> before making a larger cleanup, I want to ask here for your opinion. It
> seems that there is quite a bit of confusion about the meaning of CPU
> count printed out by the crash utility.
>
> 1. Number of CPUs
>
> Some people think that crash should always output the number of CPUs in
> the system (i.e. a quad-core server should always output 'CPUS: 4'),
> while other people think that only online CPUs should be counted.
>
> 2. CPU numbering
>
> For example, if there are 4 CPUs in the system, but some of them are
> taken offline (e.g. CPU 1 and CPU 3), _and_ crash output the number of
> online CPUs, it would print out 'CPUS: 2'. It's not easy to find out
> that valid CPU numbers are 0 and 2 in this case.
Hi Petr,
For all but ppc64, the number shown by the initial banner and the
"sys" command is essentially "the-highest-cpu-number-plus-one".
For ppc64 (as requested and implemented by the IBM/ppc64 maintainers),
it shows the number of online CPUs. There are reasons for doing it
either of the two ways, but I'm on vacation now, so you can research
the list archives for the various arguments for-and-against doing it
either way. Check the changelog.html for when it was changed for
ppc64, and then cross-reference the revision date with the list
archives.
> 3. Examining offline CPU
>
> Sometimes, it may be useful to examine the state of an offline CPU. Now,
> I know that the saved state is most likely stale, but it can be useful
> in some cases (e.g. a crash after dropping to kdb). The crash utility
> currently refuses to select an offline CPU with 'set -c #'. Are there
> any concerns about allowing it?
I tend to agree with you, but the only things that are useful and
available from an offline CPU are the swapper task for that CPU
and the runqueue for that CPU. And both of those entities are
readily accessible if you really need them. Although I don't know
anything about kdb status, so maybe there's something of per-cpu
interest, but I don't know why it would be necessary to "set"
that CPU?
In any case, like I said before, I'm just temporarily online while
on vacation, and will be back to work on the 9th.
Thanks,
Dave
14 years, 3 months
Re: [Crash-utility] Linux Banner string missing.
by Dave Anderson
----- "Feng LI" <funglee(a)gmail.com> wrote:
> Dave,
>
> I regenerate the vmcore dump file to make sure I used the correct
> version of dom0 which we have unstripped kernel from critix.
>
> The p2m_mfn pointer is still empty... :( I attached the output we have
> .. :(
>
> Too bad... It seems that I am going nowhere.
>
> Thanks for your help.
Kevin,
The p2m_mfn value still needs to be overridden on the command line, so your
latest results are not surprising. But there's one more thing you might
try. Looking at your latest debug output:
xen_kdump_data:
flags: 4 (KDUMP_MFN_LIST)
p2m_mfn: 0 <== incorrect
cr3: 0
last_mfn_read: ffffffff
last_pmd_read: ffffffff
page: 0
accesses: 0
cache_hits: 0
p2m_frames: 0
xen_phys_start: 6f969 <== incorrect
Because of the lack of support for 32-bit dom0s on 64-bit hypervisors,
the p2m_mfn and xen_phys_start values are being loaded as if the
note contained 32-bit values:
Elf64_Nhdr:
n_namesz: 4 ("Xen")
n_descsz: 80
n_type: 1000001 (XEN_ELFNOTE_CRASH_INFO)
00000003 00000000 00000004 00000000
9f7eb409 00000000 9f7ed168 00000000
9f7ed138 00000000 9f7eb3c5 00000000
9f7eb3e2 00000000 00000000 00000000
9f600000 00000000 0006f969 00000000
So in your latest dumpfile, they should be set to:
p2m_mfn: 6f969
xen_phys_start: 9f600000
And because of the bogus p2m_mfn address of 0, you get bogus mfn
values in the dump of mfns shown later on here:
x86_xen_kdump_p2m_create: p2m_mfn: 0
00000000: 7fed9050 7fed904c 7fed9058 7fed9054
00000010: 7fed9060 7fed905c f00016b3 f00016b4
00000020: f000fea5 f000e987 f00016b4 f00016b4
00000030: 98400b7c f00016a7 f000ef57 f00016b4
Those 16 values above should be mfn values.
You can override both the bogus p2m_mfn and xen_phys_start values on
the crash command line like this:
# crash --p2m_mfn 6f969 --xen_phys_start 9f600000 vmlinux vmcore
I'm guessing that you will probably still see bogus data in the list
of mfn values from x86_xen_kdump_p2m_create debug dump above, but it's
worth a shot. But if you do see the correct mfn values there, then
the correct xen_phys_start value will be required after that.
> P.S.
> I did cat /proc/iomem before I generate the vmcore dump.
>
> it shows
>
> 9ead4000-9fb2cfff : System RAM
> 9f700000-9f8dca7f : Hypervisor code and data
> 9f899ca0-9f899e93 : Crash note
> 9f899e94-9f89a027 : Crash note
> 9f89a088-9f89a21b : Crash note
> 9f89a27c-9f89a40f : Crash note
>
> is this information helpful to you ?
Not so much...
Dave
Re: [Crash-utility] Linux Banner string missing.
by Dave Anderson
----- "Feng LI" <funglee(a)gmail.com> wrote:
> Hey Dave and list,
>
>
> I am the guy who is trying to open the 64-bit vmcore dump with a 64-bit Xen
> kernel and a 32-bit dom0 kernel.
>
>
> Finally, I have received the dom0 kernel with symbols, and I tried to
> open the vmcore with crash (I patched with your "special patch").
>
>
> But I have a new problem, the crash utility exits with an error "could
> not read linux_banner string"
>
>
> <readmem: c034c000, KVADDR, "accessible check", 4, (ROE|Q), ffff82e4>
> addr: c034c000 paddr: 34c000 cnt: 4
> <readmem: c034c000, KVADDR, "readstring characters", 1499, (ROE|Q),
> ffff72d0>
> addr: c034c000 paddr: 34c000 cnt: 1499
> WARNING: cannot read linux_banner string
> linux_banner:
>
>
> crash32: boot/System.map-2.6.27.42-0.1.1.xs5.6.0.44.111158xen and
> vmcore-2010-09-15-19-29-03 do not match!
>
> Have you seen these error messages before? Thanks for any help and
> suggestions.
>
>
> yours
> Kevin F LI
>
>
> ps.
> I attached the log file generated with "-d 31" option.
Looking at the attached log file, things are regressing even further
than what was shown in your original reports. This time the vmcore
is not even being recognized as a Xen-generated core dump:
...
page_size: 0
switch_stack: 0
xen_kdump_data: (unused) <== this time
num_prstatus_notes: 0
vmcoreinfo: 0
size_vmcoreinfo: 0
nt_prstatus_percpu:
...
In your older post, it at least showed some xen_kdump_data:
...
page_size: 0
switch_stack: 0
xen_kdump_data:
flags: 4 (KDUMP_MFN_LIST)
p2m_mfn: 0
cr3: 0
last_mfn_read: ffffffff
last_pmd_read: ffffffff
page: 0
accesses: 0
cache_hits: 0
p2m_frames: 0
xen_phys_start: 7db00000
xen_major_version: 3
xen_minor_version: 0
p2m_mfn_frame_list: 0
num_prstatus_notes: 4
vmcoreinfo: 0
size_vmcoreinfo: 0
nt_prstatus_percpu:
084c8e10 084c9004 084c9198 084c932c
...
That may be due to the use of a System.map file, which forces
the crash utility to take some short-cuts during initialization.
Or again, it could be a side issue of the lack of crash utility
support for 32-bit vmcores taken from 64-bit Xen hypervisors.
But since you've apparently got the debuginfo-full dom0 vmlinux
file, there's no need to use the System.map file. What
happens when you just run: "crash -d1 vmlinux vmcore"?
Dave
Linux Banner string missing.
by LI, Feng
Hey Dave and list,
I am the guy who is trying to open the 64-bit vmcore dump with a 64-bit Xen
kernel and a 32-bit dom0 kernel.
Finally, I have received the dom0 kernel with symbols, and I tried to open
the vmcore with crash (I patched with your "special patch").
But I have a new problem, the crash utility exits with an error "could not
read linux_banner string"
<readmem: c034c000, KVADDR, "accessible check", 4, (ROE|Q), ffff82e4>
addr: c034c000 paddr: 34c000 cnt: 4
<readmem: c034c000, KVADDR, "readstring characters", 1499, (ROE|Q),
ffff72d0>
addr: c034c000 paddr: 34c000 cnt: 1499
WARNING: cannot read linux_banner string
linux_banner:
crash32: boot/System.map-2.6.27.42-0.1.1.xs5.6.0.44.111158xen and
vmcore-2010-09-15-19-29-03 do not match!
Have you seen these error messages before? Thanks for any help and
suggestions.
yours
Kevin F LI
ps.
I attached the log file generated with "-d 31" option.
Re: [Crash-utility] How to open 32 bit dom0 kdump....
by Dave Anderson
----- "Feng LI" <funglee(a)gmail.com> wrote:
> Sorry Dave,
>
>
> My mistake. Actually, we have to use 64 bit hypervisor... The file I
> show to you was created with xen-64 bit and dom0 32 bit. :(
>
> Sorry about my mistakes.
>
> When I tried the crash with --p2m_mfn 000bf969 option.
>
> x86_xen_kdump_p2m_create: p2m_mfn: bf969
> 00000000: cccccccc cccccccc cccccccc cccccccc
> 00000010: cccccccc cccccccc cccccccc cccccccc
> 00000020: cccccccc cccccccc cccccccc cccccccc
> 00000030: cccccccc cccccccc cccccccc cccccccc
>
>
> crash32: read error: physical address: cccccccc000 type: "xen kdump p2m mfn list page"
OK, so we're back to the same starting point, that being that
it's evident that crash does not support 32-bit dom0 vmcores
generated by a 64-bit hypervisor.
The page referenced by the p2m_mfn pointer contains a set of mfn
values that is needed to recreate the kernel's phys_to_machine_mapping[]
array, which is needed to translate the kernel's pseudo-physical pfns to
real machine physical addresses (mfns). Clearly something is out-of-sync
given the contents of the page at bf969000, but I have no idea what that
could be.
In any case, without that essential key, I've pretty much run out of
things to suggest.
Again, I invite the other Xen users/developers on this list to make
any suggestions, or develop support for this configuration. I actually
don't think it would require a major update, but it's not clear to me
what the problem is.
Dave
Re: [Crash-utility] How to open 32 bit dom0 kdump....
by Dave Anderson
----- "Feng LI" <funglee(a)gmail.com> wrote:
> Hey Dave,
>
> I attached the crash -d1 output with this email...
>
> Do you think there is anything wrong with my vmcore?
I'm not sure...
The dom0 "p2m_mfn" value required by the crash utility is
contained in the Xen XEN_ELFNOTE_CRASH_INFO note in the
vmcore header. That note contains this data structure,
as defined in "include/xen/elfcore.h" in the Xen hypervisor
source tree:
typedef struct {
unsigned long xen_major_version;
unsigned long xen_minor_version;
unsigned long xen_extra_version;
unsigned long xen_changeset;
unsigned long xen_compiler;
unsigned long xen_compile_date;
unsigned long xen_compile_time;
unsigned long tainted;
#ifdef CONFIG_X86
unsigned long xen_phys_start;
unsigned long dom0_pfn_to_mfn_frame_list_list;
#endif
} crash_xen_info_t;
When a dom0 crashes, it goes through machine_crash_shutdown()
in the hypervisor's "arch/x86/crash.c" file, where it gets a
pointer to the crash_xen_info structure, and then appends the
CONFIG_X86-only xen_phys_start and dom0_pfn_to_mfn_frame_list_list
fields:
void machine_crash_shutdown(void)
{
crash_xen_info_t *info;
local_irq_disable();
nmi_shootdown_cpus();
disable_IO_APIC();
hvm_disable();
info = kexec_crash_save_info();
info->xen_phys_start = xen_phys_start;
info->dom0_pfn_to_mfn_frame_list_list =
arch_get_pfn_to_mfn_frame_list_list(dom0);
}
And then the crash utility reads the dom0_pfn_to_mfn_frame_list_list
value, and stores it in the "p2m_mfn" field that I referenced in the
last email.
Now, looking at your crash -d1 output, here's the XEN_ELFNOTE_CRASH_INFO,
where it should have picked up the bf969 as the p2m_mfn value:
Elf64_Nhdr:
n_namesz: 4 ("Xen")
n_descsz: 80
n_type: 1000001 (XEN_ELFNOTE_CRASH_INFO)
00000003 00000000 00000004 00000000
d7beb409 00000000 d7bed168 00000000
d7bed138 00000000 d7beb3c5 00000000
d7beb3e2 00000000 00000000 00000000
d7a00000 00000000 000bf969 00000000
... [ snip ] ...
But it read it as a 0, as evidenced by the "p2m_mfn: 0" shown
below:
... [ snip ] ...
This GDB was configured as "i686-pc-linux-gnu"...
x86_xen_kdump_p2m_create: p2m_mfn: 0
... [ snip ] ...
In the meantime, I provisioned a RHEL5 32-bit x86 system with a 32-bit dom0,
and forced a crash. As expected, it created a 64-bit ELF vmcore, which
does *not* display the "mismatch" warning message like yours does. But more
importantly, the XEN_ELFNOTE_CRASH_INFO dump on my vmcore looks like this,
where the p2m_mfn is 2c199:
Elf64_Nhdr:
n_namesz: 4 ("Xen")
n_descsz: 40
n_type: 1000001 (XEN_ELFNOTE_CRASH_INFO)
00000003 00000001 0018e97f 0018e98a
00190120 0018e932 0018e94f 00000001
00000000 0002c199
Note that the fields in your vmcore are 64-bit values, while those above
in the RHEL5 are 32-bit values. That presumably is due to the fact that
you are running a 64-bit hypervisor? (whereas on my RHEL5 system the
hypervisor is 32-bit) Are you *sure* that you are running a 32-bit hypervisor?
In any case, the p2m_mfn value in the vmcore header can be overridden
on the crash command line. What happens if you enter:
# crash vmlinux vmcore --p2m_mfn bf969
I'm guessing that you'll probably bump into yet another oddity, but it's
worth a shot...
Dave
Re: [Crash-utility] How to open 32 bit dom0 kdump....
by Dave Anderson
----- "Feng LI" <funglee(a)gmail.com> wrote:
> Thanks Dave,
>
>
> I tried the patch you attached in the previous email.
>
> I am using the 32 bit crash utilities, and it seems to be able to load
> the vmcore (64-bit). But I still have a problem: the physical address
> 0x7fed9050 was padded to 0x7fed9050000, and crash32 exited with the error
> message
>
> <readmem: 7fed9050000, PHYSADDR, "xen kdump p2m mfn list page", 4096, (ROE), 894f538>
> addr: 7fed9050000 paddr: 7fed9050000 cnt: 4096
> crash32: read error: physical address: 7fed9050000 type: "xen kdump p2m mfn list page"
> ...
> ps.
> The output of patched crash32
>
The next problem is the p2m_mfn value that it read from the vmcore,
which is a pretty obviously incorrect value of 0, and which is
ultimately causing it to read from physical address 0:
> This GDB was configured as "i686-pc-linux-gnu"...
> GETBUF(128 -> 0)
> GETBUF(1500 -> 1)
>
>
> FREEBUF(1)
> FREEBUF(0)
> <readmem: c034e720, KVADDR, "kernel_config_data", 32768, (ROE), 898e8e0>
> addr: c034e720 paddr: 34e720 cnt: 2272
> x86_xen_kdump_p2m_create: p2m_mfn: 0
> <readmem: 0, PHYSADDR, "xen kdump p2m mfn page", 4096, (ROE), 894f538>
> addr: 0 paddr: 0 cnt: 4096
> 00000000: 7fed9050 7fed904c 7fed9058 7fed9054
> 00000010: 7fed9060 7fed905c f000859c f000ff53
> 00000020: f000fea5 f000e987 f0000c75 f0000c75
> 00000030: 99800b7c f0000c75 f000ef57 f000f545
>
>
> <readmem: 7fed9050000, PHYSADDR, "xen kdump p2m mfn list page", 4096, (ROE), 894f538>
> addr: 7fed9050000 paddr: 7fed9050000 cnt: 4096
> crash32: read error: physical address: 7fed9050000 type: "xen kdump p2m mfn list page"
>
> crash32: cannot read xen kdump p2m mfn list page
When you run with "crash -d1 ..." you should have seen a display of the p2m_mfn
value that was read from the vmcore header. Can you attach that output?
Dave
Re: [Crash-utility] How to open 32 bit dom0 kdump....
by Dave Anderson
----- "Feng LI" <funglee(a)gmail.com> wrote:
> Thanks Dave,
>
> I had tried another combination: 32 bit Xen kernel with 32 bit Dom0
> kernel, but I have the similar issue. The vmcore file is still in 64
> bit format. (Our system has a large memory configuration 8GB-192GB),
> Is there any way I can generate elf32 vmcore file ?
>
OK, now I'm thinking maybe we've got a regression of some sort. The
bare-metal kdump procedure is designed to use the 64-bit vmcore format
all of the time because physical memory beyond the 4GB limit cannot
be referenced using the fields in a 32-bit vmcore header.
However, you can configure 32-bit by modifying /etc/sysconfig/kdump here:
# Example:
# KEXEC_ARGS="--elf32-core-headers"
KEXEC_ARGS=" --args-linux"
by making KEXEC_ARGS=" --args-linux --elf32-core-headers"
But before doing that, can you try applying the attached patch to
the crash utility?
Thanks,
Dave
> Thanks.
>
>
> On Fri, Sep 10, 2010 at 5:03 PM, Dave Anderson < anderson(a)redhat.com >
> wrote:
>
>
>
>
> ----- "Feng LI" < funglee(a)gmail.com > wrote:
>
>
> > Thanks Dave.
> >
> > I attached the output of elfread -a with this email...
>
> Hmmm -- now that I think about it, it seems that the crash
> utility has never supported dom0 vmcores generated from this
> type of Xen hypervisor/dom0 combination.
>
> Red Hat kernel versions come with the xen.gz and vmlinuz files
> packaged together, i.e., both 64-bit or both 32-bit:
>
> # rpm -qpl kernel-xen-2.6.18-219.el5.x86_64.rpm
> /boot/.vmlinuz-2.6.18-219.el5xen.hmac
> /boot/System.map-2.6.18-219.el5xen
> /boot/config-2.6.18-219.el5xen
> /boot/symvers-2.6.18-219.el5xen.gz
> /boot/vmlinuz-2.6.18-219.el5xen
> /boot/xen-syms-2.6.18-219.el5
> /boot/xen.gz-2.6.18-219.el5 <= 64-bit
> ...
>
> # rpm -qpl kernel-xen-2.6.18-219.el5.i686.rpm
> /boot/.vmlinuz-2.6.18-219.el5xen.hmac
> /boot/System.map-2.6.18-219.el5xen
> /boot/config-2.6.18-219.el5xen
> /boot/symvers-2.6.18-219.el5xen.gz
> /boot/vmlinuz-2.6.18-219.el5xen
> /boot/xen-syms-2.6.18-219.el5
> /boot/xen.gz-2.6.18-219.el5 <= 32-bit
> ...
>
> So, it's highly unlikely that anyone, either internally at Red Hat
> or among our customers, would ever run such a combination.
> And I don't recall ever working with the crash utility to
> support it.
>
> I'm curious whether anybody on this list has ever done this?
>
> After all these years of Xen existence, you would think that
> somebody else would have bumped into this anomaly before...
>
> Dave
>
>
>
>
> --
> Crash-utility mailing list
> Crash-utility(a)redhat.com
> https://www.redhat.com/mailman/listinfo/crash-utility
>
Re: [Crash-utility] (pvops 2.6.32.21) crash: cannot read/find cr3 page
by Dave Anderson
----- "tom anderson" <xentoma(a)hotmail.com > wrote:
> if I use a pvops domU kernel version 2.6.32.18 crash works fine. However if I
> use a pvops domU kernel version 2.6.32.21 I get the error messages:
>
> crash: cannot find mfn 874307 (0xd5743) in page index
> crash: cannot read/find cr3 page
>
> Any suggestions as to what is wrong?
Hi Tom,
I can't really give you specific suggestions as to what is wrong,
but I can at least tell you what the crash utility is encountering.
I suppose there's good news and bad news concerning this issue,
the good news being that it worked OK with 2.6.32.18, which is
fairly close to your failing 2.6.32.21. Since I've done very little
with Xen support since Red Hat dropped Xen development beyond our
RHEL5 2.6.18-era release, it's always good to hear that it actually
still worked with a 2.6.32.18 kernel. I imagine eventually something
will break in the future, and at that time I may likely require outside
assistance to keep Xen support in place.
Anyway, that all being said, in your failure case, here are the issues
at hand. The header shows this:
xc_core:
header:
xch_magic: f00febed (XC_CORE_MAGIC)
xch_nr_vcpus: 7
xch_nr_pages: 521792 (0x7f640)
xch_ctxt_offset: 1896 (0x768)
xch_index_offset: 2137305088 (0x7f64b000)
xch_pages_offset: 45056 (0xb000)
elf_class: ELFCLASS64
elf_strtab_offset: 2145653760 (0x7fe41400)
format_version: 0000000000000001
shared_info_offset: 38072 (0x94b8)
The "xch_nr_pages" indicates that the domU vmlinux kernel has 521792
pseudo-physical pages assigned to it, where those pseudo-physical pages
are backed by the Xen hypervisor by machine pages, which are the "real"
physical pages. And so when the crash utility needs to access a
pseudo-physical page used by a domU kernel, that pseudo-physical page
needs to be translated to the actual machine physical page that backs it,
and then that physical page needs to be found in the dumpfile. The PFN
(page frame number) of a pseudo-physical page is called a "pfn", and the
PFN of a machine page is called an "mfn" or "gmfn".
To match a pfn with its corresponding mfn, the kdump operation dumps an
array of pfn-to-mfn pairs in the vmcore's ".xen_p2m" section; this is taken
from http://www.sfr-fresh.com/unix/misc/xen-4.0.1.tar.gz:a/xen-4.0.1/docs/misc...
".xen_p2m" section
name ".xen_p2m"
type SHT_PROGBITS
structure array of struct xen_dumpcore_p2m
struct xen_dumpcore_p2m {
uint64_t pfn;
uint64_t gmfn;
};
description
This element represents the frame number of the page
in the .xen_pages section.
pfn: guest-specific pseudo-physical frame number
gmfn: machine physical frame number
The size of the array is stored in the xch_nr_pages member of the
header note descriptor in the .note.Xen note section.
The entries are stored in pfn-ascending order.
This section must exist when the domain is in non-auto-translated
physmap mode, i.e. currently an x86 paravirtualized domain.
The "pfn" value associated with the "gmfn" value, is in turn used
as an index into an array of actual pages in the dumpfile, which is
found at the "xch_pages_offset" at 45056 (0xb000).
The start of the index array is found in the dumpfile at the "xch_index_offset"
at 2137305088 (0x7f64b000), and ends at the "elf_strtab_offset" at 2145653760
(0x7fe41400). Accordingly, if you subtract 2137305088 from 2145653760,
the array of xen_dumpcore_p2m structures is 8348672 bytes, which, when
divided by the size of the data structure (16), equals the value of
"xch_nr_pages", or 521792.
Anyway, the very first read attempt requires the crash utility to do a
one-time-only recreation of the kernel's "p2m_top" array (pvops kernels only),
and in so doing needs to first read the page found in the hypervisor's cr3
register, which contains a machine address:
<readmem: ffffffff81614800, KVADDR, "kernel_config_data", 32768, (ROE), 2fed090>
addr: ffffffff81614800 paddr: 1614800 cnt: 2048
GETBUF(248 -> 0)
FREEBUF(0)
MEMBER_OFFSET(vcpu_guest_context, ctrlreg): 4984
ctrlreg[0]: 80050033
ctrlreg[1]: d5742000
ctrlreg[2]: 0
ctrlreg[3]: d5743000
ctrlreg[4]: 2660
ctrlreg[5]: 0
ctrlreg[6]: 0
ctrlreg[7]: 0
crash: cannot find mfn 874307 (0xd5743) in page index
crash: cannot read/find cr3 page
It contained a machine address of d5743000, which when shifted right equates
to a PFN (or "mfn") of 874307 (0xd5743). It then walked through the index
array of xen_dumpcore_p2m structures in the dumpfile, looking for the one that
contains that "gmfn" value.
But for whatever reason, it could not find it. That being the
case, there's no way it can continue.
I can't really help much more than that. The function that
walks through the array is xc_core_mfn_to_page() in xendump.c.
It prints the "cannot find mfn ..." message, and returns back
to the x86_64_pvops_xendump_p2m_create() function in x86_64.c,
which prints the final, fatal, "cannot read/find cr3 page"
message.
If you capture the same type of debug output with the earlier
kernel, you should see it get to the point above and continue
on from there.
Dave
Re: [Crash-utility] How to open 32 bit dom0 kdump....
by Dave Anderson
----- "Feng LI" <funglee(a)gmail.com> wrote:
> Sorry about the confusion, I don't have any domU guest running...
>
> My grub menu.lst is,
> kernel /boot/xen.gz watchdog=0 dom0_mem=1024M
> lowmem_emergency_pool=16M crashkernel=128M@32M nmi=dom0
> module /boot/vmlinuz-2.6.27.42-0.1.1.xs5.6.0.44.111158xen ro
> ramdisk_size=250000 nmi_watchdog=0 xencons=off console=rcons0
> module /boot/initrd-5.6.0-31188p.img
>
> xen.gz is 64 bit one, while dom0 kernel is 32bit...
>
> I am using echo c > /proc/sysrq-trigger to generate the kernel dump.
>
> When I tried to open the kernel dump,
>
>
>
> [debugger@crash HV_crash]$ crash32 vmlinux-2.6.27
> vmcore-2010-09-02-22-30-38
>
> crash32 5.0.3
>
> ... [ snip ] ...
>
> This program has absolutely no warranty. Enter "help warranty" for
> details.
>
> WARNING: machine type mismatch:
> crash utility: X86
> vmcore-2010-09-02-22-30-38: X86_64
>
> crash32: vmcore-2010-09-02-22-30-38: not a supported file format
Well, for starters, the crash utility must match the vmlinux file type,
i.e., you should be using the 32-bit version.
What is the output of:
# readelf -a vmcore-2010-09-02-22-30-38
Dave
>
> [debugger@crash HV_crash]$ crash64 vmlinux-2.6.27 vmcore-2010-09-02-22-30-38
>
> crash64 5.0.3
>
> ... [ snip ] ...
>
> WARNING: machine type mismatch:
> crash utility: X86_64
> vmlinux-2.6.27: X86
>
> crash64: vmlinux-2.6.27: not a supported file format
>
>