Re: [Crash-utility] Kernel Crash Analysis on Android
by Shankar, AmarX
Hi Dave,
Thanks for your information regarding the kexec tool.
I am unable to download kexec from the link below.
http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/kexec-too...
It says HTTP 404 Page Not Found.
Could you please guide me on this?
Thanks & Regards,
Amar Shankar
> On Wed, Mar 21, 2012 at 06:00:00PM +0000, Shankar, AmarX wrote:
>
> > I want to do kernel crash analysis on an Android Merrifield target.
> >
> > Could someone please help me with how to do it?
>
> Merrifield is pretty similar to Medfield, e.g. it has an x86 core. So I
> guess you can follow the instructions on how to set up kdump on x86 (see
> Documentation/kdump/kdump.txt), unless you already have that configured.
>
> crash should support this directly, presuming you have vmlinux/vmcore files to
> feed it. You can configure crash to support x86 on an x86_64 host by running:
>
> % make target=X86
> % make
>
> (or something along those lines).
Right -- just the first make command will suffice, i.e., when running
on an x86_64 host:
$ wget http://people.redhat.com/anderson/crash-6.0.4.tar.gz
$ tar xzf crash-6.0.4.tar.gz
...
$ cd crash-6.0.4
$ make target=X86
...
$ ./crash <path-to>/vmlinux <path-to>/vmcore
Dave
From: Shankar, AmarX
Sent: Wednesday, March 21, 2012 11:30 PM
To: 'crash-utility(a)redhat.com'
Subject: Kernel Crash Analysis on Android
Hi,
I want to do kernel crash analysis on an Android Merrifield target.
Could someone please help me with how to do it?
Thanks & Regards,
Amar Shankar
[PATCH] kmem, snap: iomem/ioport display and vmcore snapshot support
by HATAYAMA Daisuke
Some days ago I was in a situation where I had to convert a vmcore in
kvmdump format into ELF, since an extension module we have locally can
be used only with a relatively old crash utility, around version 4,
and such an old crash utility cannot handle the kvmdump format.
To do the conversion conveniently, I used the snap command with some
modifications so that it tries to use the iomem information in the vmcore
instead of the host's /proc/iomem. This patch is its cleaned-up version.
During this development, I naturally got around to also making an interface
for accessing resource objects, so together with the snap
command's patch, I also extended the kmem command with iomem/ioport
support. Specifically:
kmem -r displays /proc/iomem:
crash> kmem -r
00000000-0000ffff : reserved
00010000-0009dbff : System RAM
0009dc00-0009ffff : reserved
000c0000-000c7fff : Video ROM
...
and kmem -R displays /proc/ioports:
crash> kmem -R
0000-001f : dma1
0020-0021 : pic1
0040-0043 : timer0
0050-0053 : timer1
...
Looking back at old versions of the kernel source code, the resource
structure has been unchanged since linux-2.4.0. I borrowed the way of
walking the resource tree in this patch from the latest v3.3-rc series,
but I expect the logic is also applicable to old kernels; I'm counting on
Dave's regression test suite here.
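For reference, below is a minimal, self-contained sketch of that depth-first
walk (visit a resource, then its children indented one level, then its
siblings). The struct mirrors only the sibling/child fields of the kernel's
struct resource, and the two-entry sample tree is made up; inside crash the
nodes are of course read from the dump with readmem() rather than followed
as live pointers.
#include <stdio.h>

struct resource {
	unsigned long start, end;
	const char *name;
	struct resource *sibling, *child;
};

/* Print a resource and its subtree in /proc/iomem style. */
static void walk_resources(struct resource *r, int depth)
{
	for (; r; r = r->sibling) {
		printf("%*s%08lx-%08lx : %s\n",
		       depth * 2, "", r->start, r->end, r->name);
		walk_resources(r->child, depth + 1);	/* children first, then next sibling */
	}
}

int main(void)
{
	/* Made-up two-entry tree resembling the kmem -r output above. */
	struct resource ram      = { 0x10000, 0x9dbff, "System RAM", NULL, NULL };
	struct resource reserved = { 0x0, 0xffff, "reserved", &ram, NULL };

	walk_resources(&reserved, 0);
	return 0;
}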
Also, there may be another command more suitable for the iomem/ioport display.
If necessary, I'll repost the patch.
---
HATAYAMA Daisuke (4):
Add vmcore snapshot support
Add kmem -r and -R options
Add dump iomem/ioport functions; a helper for resource objects
Add a helper function for iterating resource objects
defs.h | 9 ++++
extensions/snap.c | 54 ++++++++++++++++++++++-
help.c | 2 +
memory.c | 122 +++++++++++++++++++++++++++++++++++++++++++++++++++--
4 files changed, 180 insertions(+), 7 deletions(-)
--
Thanks.
HATAYAMA Daisuke
Re: [Crash-utility] question about phys_base
by Dave Anderson
----- Original Message -----
> >
> > OK, so then I don't understand what you mean by "may be the same"?
> >
> > You didn't answer my original question, but if I understand you correctly,
> > it would be impossible for the qemu host to create a PT_LOAD segment that
> > describes an x86_64 guest's __START_KERNEL_map region, because the host
> > doesn't know what kind of kernel the guest is running.
>
> Yes. Even if the guest is Linux, it is still impossible to do, because
> the guest may be in the second kernel.
>
> qemu-dump walks all of the guest's page tables and collects the virtual-to-physical
> address mapping. If a page is not used by the guest, the virtual address is set
> to 0. I create PT_LOAD segments according to that mapping. So if the guest is Linux,
> there may be a PT_LOAD segment that describes the __START_KERNEL_map region.
> But the information stored in that PT_LOAD may be for the second kernel. If crash
> uses it, crash will see the second kernel, not the first kernel.
Just to be clear -- what do you mean by the "second" kernel? Do you
mean that a guest kernel crashed, performed a kdump operation,
that second kdump kernel then failed somehow, and now you're trying
to do a "virsh dump" on the kdump kernel?
Dave
question about phys_base
by Wen Congyang
Hi, Dave
I am implementing a new dump command in qemu. The vmcore's
format is ELF (like kdump), and I try to provide phys_base in
the PT_LOAD segment. But if the OS uses the first vcpu to do kdump, the
value of phys_base is wrong.
I found a function x86_64_virt_phys_base() in crash's code.
Is it OK to call this function first? If that function
succeeds, we would not calculate phys_base from the PT_LOAD.
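For context, the arithmetic that relates phys_base to a PT_LOAD segment
covering the kernel's __START_KERNEL_map region is sketched below; the segment
addresses are made-up examples, and this is only an illustration of the
relation, not the actual crash or qemu code.
#include <stdio.h>

#define START_KERNEL_MAP 0xffffffff80000000UL	/* x86_64 __START_KERNEL_map */

int main(void)
{
	/* Hypothetical PT_LOAD describing the kernel text mapping. */
	unsigned long p_vaddr = 0xffffffff81000000UL;	/* virtual start  */
	unsigned long p_paddr = 0x0000000001000000UL;	/* physical start */

	/* Physical load offset of the kernel relative to its default. */
	unsigned long phys_base = p_paddr - (p_vaddr - START_KERNEL_MAP);

	printf("phys_base = %#lx\n", phys_base);	/* 0 for a non-relocated kernel */
	return 0;
}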
Thanks
Wen Congyang
[PATCH] runq: search current task's runqueue explicitly
by HATAYAMA Daisuke
Currently, the runq sub-command doesn't consider that a CFS runqueue's
current task is removed from the CFS runqueue's rbtree. Due to this, the
remaining CFS runqueues that follow the current task's are not displayed.
This patch fixes this by making the runq sub-command search the current
task's runqueue explicitly.
Note that a CFS runqueue exists for each task group, and so does a CFS
runqueue's current task, so the above search needs to be done
recursively.
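Conceptually, the recursion looks like the sketch below: a cfs_rq's current
entity is off its rbtree, and when that entity is a group entity it owns a
nested cfs_rq whose current entity must be examined in turn. The structures
are reduced to the fields that matter and the hierarchy in main() is
invented; crash itself reads these objects with readmem()/OFFSET() rather
than dereferencing live pointers.
#include <stdio.h>

struct cfs_rq;

struct sched_entity {
	struct cfs_rq *my_q;		/* non-NULL if this entity represents a task group */
};

struct cfs_rq {
	struct sched_entity *curr;	/* running entity, dequeued from the rbtree */
	/* rbtree of queued entities omitted */
};

static void dump_cfs_rq(struct cfs_rq *cfs_rq, int depth)
{
	printf("%*scfs_rq %p, curr %p\n", depth * 2, "",
	       (void *)cfs_rq, (void *)cfs_rq->curr);

	/* The current entity is missing from the rbtree; if it is a group
	 * entity, descend into its own runqueue as well. */
	if (cfs_rq->curr && cfs_rq->curr->my_q)
		dump_cfs_rq(cfs_rq->curr->my_q, depth + 1);

	/* ... the rbtree-queued entities would be walked and printed here ... */
}

int main(void)
{
	/* Invented two-level hierarchy: a root cfs_rq whose curr is a group entity. */
	struct cfs_rq group_rq = { .curr = NULL };
	struct sched_entity group_se = { .my_q = &group_rq };
	struct cfs_rq root_rq = { .curr = &group_se };

	dump_cfs_rq(&root_rq, 0);
	return 0;
}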
Test
====
On vmcore I made 7 task groups:
root group --- A --- AA --- AAA
               |      +- AAB
               |
               +- AB --- ABA
                      +- ABB
and then I ran three CPU-bound tasks, each of which is exactly
int main(void) { for (;;) continue; return 0; }
in each task group, including the root group; so 24 tasks in total. For
readability, I annotated each task name with the name of its task group.
For example, loop.ABA belongs to task group ABA.
Look at the CPU 0 column below. [before] lacks 8 tasks, while [after]
successfully shows all tasks on the runqueue, which is identical to
the result of [sched debug], which is expected to output the correct result.
I'll send this vmcore later.
[before]
crash> runq | cat
CPU 0 RUNQUEUE: ffff88000a215f80
CURRENT: PID: 28263 TASK: ffff880037aaa040 COMMAND: "loop.ABA"
RT PRIO_ARRAY: ffff88000a216098
[no tasks queued]
CFS RB_ROOT: ffff88000a216010
[120] PID: 28262 TASK: ffff880037cc40c0 COMMAND: "loop.ABA"
<cut>
[after]
crash_fix> runq
CPU 0 RUNQUEUE: ffff88000a215f80
CURRENT: PID: 28263 TASK: ffff880037aaa040 COMMAND: "loop.ABA"
RT PRIO_ARRAY: ffff88000a216098
[no tasks queued]
CFS RB_ROOT: ffff88000a216010
[120] PID: 28262 TASK: ffff880037cc40c0 COMMAND: "loop.ABA"
[120] PID: 28271 TASK: ffff8800787a8b40 COMMAND: "loop.ABB"
[120] PID: 28272 TASK: ffff880037afd580 COMMAND: "loop.ABB"
[120] PID: 28245 TASK: ffff8800785e8b00 COMMAND: "loop.AB"
[120] PID: 28246 TASK: ffff880078628ac0 COMMAND: "loop.AB"
[120] PID: 28241 TASK: ffff880078616b40 COMMAND: "loop.AA"
[120] PID: 28239 TASK: ffff8800785774c0 COMMAND: "loop.AA"
[120] PID: 28240 TASK: ffff880078617580 COMMAND: "loop.AA"
[120] PID: 28232 TASK: ffff880079b5d4c0 COMMAND: "loop.A"
<cut>
[sched debug]
crash> runq -d
CPU 0
[120] PID: 28232 TASK: ffff880079b5d4c0 COMMAND: "loop.A"
[120] PID: 28239 TASK: ffff8800785774c0 COMMAND: "loop.AA"
[120] PID: 28240 TASK: ffff880078617580 COMMAND: "loop.AA"
[120] PID: 28241 TASK: ffff880078616b40 COMMAND: "loop.AA"
[120] PID: 28245 TASK: ffff8800785e8b00 COMMAND: "loop.AB"
[120] PID: 28246 TASK: ffff880078628ac0 COMMAND: "loop.AB"
[120] PID: 28262 TASK: ffff880037cc40c0 COMMAND: "loop.ABA"
[120] PID: 28263 TASK: ffff880037aaa040 COMMAND: "loop.ABA"
[120] PID: 28271 TASK: ffff8800787a8b40 COMMAND: "loop.ABB"
[120] PID: 28272 TASK: ffff880037afd580 COMMAND: "loop.ABB"
<cut>
Diff stat
=========
defs.h | 1 +
task.c | 37 +++++++++++++++++--------------------
2 files changed, 18 insertions(+), 20 deletions(-)
Thanks.
HATAYAMA, Daisuke
[RFC] makedumpfile, crash: LZO compression support
by HATAYAMA Daisuke
Hello,
This is an RFC patch set that adds LZO compression support to
makedumpfile and the crash utility. LZO is about as good as ZLIB in
compressed size but by far better in speed, which reduces downtime during
crash dump generation and refiltering.
How to build:
1. Get the LZO library, which is provided as the lzo-devel package on recent
Linux distributions and is also available on the author's website:
http://www.oberhumer.com/opensource/lzo/.
2. Apply the patch set to makedumpfile v1.4.0 and crash v6.0.0.
3. Build both using make. But for crash, for now do the following:
$ make CFLAGS="-llzo2"
How to use:
I've added a new -l option for LZO compression in this patch set. So, for
example, do as follows:
$ makedumpfile -l vmcore dumpfile
$ crash vmlinux dumpfile
Request for a configure-like feature in the crash utility:
I would like a configure-like feature in the crash utility so that users
can select at build time whether to actually enable the LZO feature,
that is: ./configure --enable-lzo or ./configure --disable-lzo.
The reason is that support staff often download and install the
latest version of the crash utility on machines where the lzo library is not
provided.
Looking at the source code, it seems to me that crash does its own kind
of configuration processing locally, around configure.c,
and I guess it's difficult to use the autoconf tools directly.
Or is there a better way?
Performance Comparison:
Sample Data
Ideally, I should have measured the performance on a large enough number
of vmcores generated from machines that were actually running, but I
don't have enough sample vmcores now, so I couldn't do that. Hence this
comparison doesn't answer the question of I/O time improvement. This
is a TODO for now.
Instead, I chose the worst and best cases regarding compression
ratio and speed only. Specifically, the former is /dev/urandom and
the latter is /dev/zero.
I got sample data of 10MB, 100MB and 1GB by doing something like this:
$ dd bs=4096 count=$((1024*1024*1024/4096)) if=/dev/urandom of=urandom.1GB
How to measure
Then I performed compression on each 4096-byte block and
measured the total compression time and output size. See the attached
mycompress.c.
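(The attached mycompress.c is not reproduced in this archive; a rough,
standalone sketch of that kind of per-4096-byte-block measurement, assuming
liblzo2's lzo1x_1_compress() and built with -llzo2, could look like the
following. Timing code is omitted; the program only reports total input and
output sizes.)
#include <stdio.h>
#include <stdlib.h>
#include <lzo/lzo1x.h>

#define BLOCK_SIZE 4096

int main(int argc, char **argv)
{
	unsigned char in[BLOCK_SIZE];
	unsigned char out[BLOCK_SIZE + BLOCK_SIZE / 16 + 64 + 3];	/* worst-case LZO output */
	unsigned char *wrkmem = malloc(LZO1X_1_MEM_COMPRESS);
	unsigned long long total_in = 0, total_out = 0;
	size_t n;
	FILE *fp;

	if (argc != 2 || !(fp = fopen(argv[1], "rb"))) {
		fprintf(stderr, "usage: %s <sample-file>\n", argv[0]);
		return 1;
	}
	if (!wrkmem || lzo_init() != LZO_E_OK)
		return 1;

	while ((n = fread(in, 1, BLOCK_SIZE, fp)) > 0) {
		lzo_uint out_len = sizeof(out);

		/* Compress each block independently, the same block-wise scheme
		 * the compressed dump format uses per page. */
		if (lzo1x_1_compress(in, n, out, &out_len, wrkmem) != LZO_E_OK)
			return 1;
		total_in += n;
		total_out += out_len;
	}
	fclose(fp);
	free(wrkmem);

	printf("in: %llu bytes, out: %llu bytes (%.1f%%)\n", total_in, total_out,
	       total_in ? 100.0 * total_out / total_in : 0.0);
	return 0;
}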
Result
See attached file result.txt.
Discussion
For both kinds of data, lzo's compression was considerably quicker
than zlib's. The compression ratio is about 37% for urandom data, and
about 8.5% for zero data. The actual contents of physical memory
would be somewhere in between the two cases, so I guess the average
compression time ratio is between 37% and 8.5%.
Although beyond the topic of this patch set, we can estimate the worst-case
compression time for larger data sizes, since compression is performed
block by block and the compression time increases
linearly. The estimated worst-case time for 2TB of memory is about 15 hours
for lzo and about 40 hours for zlib. In that worst case the compressed data
is larger than the original, so the compressed blocks are really not used and
the compression time is entirely wasted. I think compression must be
done in parallel, and I'll post such a patch later.
Diffstat
* makedumpfile
diskdump_mod.h | 3 +-
makedumpfile.c | 98 +++++++++++++++++++++++++++++++++++++++++++++++++------
makedumpfile.h | 12 +++++++
3 files changed, 101 insertions(+), 12 deletions(-)
* crash
defs.h | 1 +
diskdump.c | 20 +++++++++++++++++++-
diskdump.h | 3 ++-
3 files changed, 22 insertions(+), 2 deletions(-)
TODO
* evaluation including I/O time using actual vmcores
Thanks.
HATAYAMA, Daisuke
Re: [Crash-utility] [RFI] Support Fujitsu's sadump dump format
by tachibana@mxm.nes.nec.co.jp
Hi Hatayama-san,
On 2011/06/29 12:12:18 +0900, HATAYAMA Daisuke <d.hatayama(a)jp.fujitsu.com> wrote:
> From: Dave Anderson <anderson(a)redhat.com>
> Subject: Re: [Crash-utility] [RFI] Support Fujitsu's sadump dump format
> Date: Tue, 28 Jun 2011 08:57:42 -0400 (EDT)
>
> >
> >
> > ----- Original Message -----
> >> Fujitsu has stand-alone dump mechanism based on firmware level
> >> functionality, which we call SADUMP, in short.
> >>
> >> We've maintained utility tools internally, but now we're thinking that
> >> the best thing is for the crash utility and makedumpfile to support the
> >> sadump format, from the viewpoint of both portability and maintainability.
> >>
> >> We'll of course be responsible for its maintenance in a continuous
> >> manner. The sadump dump format is very similar to the diskdump format, and
> >> hence to the kdump (compressed) format, so we estimate the patch set would
> >> be relatively small.
> >>
> >> Could you tell me whether crash utility and makedumpfile can support
> >> the sadump format? If OK, we'll start to make patchset.
I think it's not a bad idea for makedumpfile to support sadump. However, I have
several questions.
- Do you want to use makedumpfile to shrink an existing file that sadump has
dumped?
- It isn't possible to support the same format as the kdump-compressed format
now, is it?
- When the information that makedumpfile reads from a note of /proc/vmcore
(or from a header of the kdump-compressed format) is extended by a new version
of makedumpfile, will sadump need to be modified as well?
Thanks
tachibana
> >
> > Sure, yes, the crash utility can always support another dumpfile format.
> >
>
> Thanks. It helps a lot.
>
> > It's unclear to me how similar SADUMP is to diskdump/compressed-kdump.
> > Does your internal version patch diskdump.c, or do you maintain your
> > own "sadump.c"? I ask because if your patchset is at all intrusive,
> > I'd prefer it be kept in its own file, primarily for maintainability,
> > but also because SADUMP is essentially a black-box to anybody outside
> > Fujitsu.
>
> What I meant when I used ``similar'' is both literally and
> logically. The format consists of a diskdump-header-like header, two
> kinds of bitmaps used for the same purpose as those in the diskdump format,
> and memory data. They can be handled in common with the existing data
> structure, diskdump_data, non-intrusively, so I hope they can be placed
> in diskdump.c.
>
> On the other hand, there is code that belongs in such a specific
> area: sadump is triggered depending on kdump's progress, and so the
> register values contained in the vmcore vary according to that
> progress. If crash_notes has been initialized when sadump is
> triggered, sadump packs the register values into crash_notes; if not
> yet, it packs the registers gathered by firmware. This is sadump-specific
> processing, so I think putting it in a dedicated sadump.c file is a
> natural and reasonable choice.
>
> Anyway, I have not made any patch set for this. I'll post a patch set
> when it is complete.
>
> Again, thanks a lot for the positive answer.
>
> Thanks.
> HATAYAMA, Daisuke
>
>
> _______________________________________________
> kexec mailing list
> kexec(a)lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec
[PATCH v2] Fix a segfault by "net" command
by Kazuhito Hagio
Hi Dave,
I updated my patch to fix a missing return value in the function.
--
Without this patch, when a network device has a lot of IP addresses, the "net"
command can generate a segmentation fault due to a buffer overflow. I have
seen several vmcores like that in customer support.
# for i in {1..250} ; do ifconfig eth1:$i 192.168.1.$i ; done
# crash
...
crash> net
NET_DEVICE NAME IP ADDRESS(ES)
ffff88007faab000 lo 127.0.0.1
ffff88003e097000 eth0 192.168.122.182
ffff88003e12b000 eth1 192.168.1.1, 192.168.1.2, ..., 192.168.1.250
ffff88003e12e000 eth2
Segmentation fault (core dumped)
Changes in v2:
* Fix a missing return value in get_device_address()
Signed-off-by: Kazuhito Hagio <k-hagio(a)ab.jp.nec.com>
---
net.c | 50 ++++++++++++++++++++++++++++++++++++--------------
1 file changed, 36 insertions(+), 14 deletions(-)
diff --git a/net.c b/net.c
index bb86963..4199091 100644
--- a/net.c
+++ b/net.c
@@ -70,7 +70,7 @@ static void show_net_devices_v3(ulong);
static void print_neighbour_q(ulong, int);
static void get_netdev_info(ulong, struct devinfo *);
static void get_device_name(ulong, char *);
-static void get_device_address(ulong, char *);
+static long get_device_address(ulong, char **, long);
static void get_sock_info(ulong, char *);
static void dump_arp(void);
static void arp_state_to_flags(unsigned char);
@@ -441,7 +441,8 @@ show_net_devices(ulong task)
{
ulong next;
long flen;
- char buf[BUFSIZE];
+ char *buf;
+ long buflen = BUFSIZE;
if (symbol_exists("dev_base_head")) {
show_net_devices_v2(task);
@@ -459,6 +460,7 @@ show_net_devices(ulong task)
if (!net->netdevice || !next)
return;
+ buf = GETBUF(buflen);
flen = MAX(VADDR_PRLEN, strlen(net->netdevice));
fprintf(fp, "%s NAME IP ADDRESS(ES)\n",
@@ -472,12 +474,14 @@ show_net_devices(ulong task)
get_device_name(next, buf);
fprintf(fp, "%-6s ", buf);
- get_device_address(next, buf);
+ buflen = get_device_address(next, &buf, buflen);
fprintf(fp, "%s\n", buf);
readmem(next+net->dev_next, KVADDR, &next,
sizeof(void *), "(net_)device.next", FAULT_ON_ERROR);
} while (next);
+
+ FREEBUF(buf);
}
static void
@@ -485,13 +489,15 @@ show_net_devices_v2(ulong task)
{
struct list_data list_data, *ld;
char *net_device_buf;
- char buf[BUFSIZE];
+ char *buf;
+ long buflen = BUFSIZE;
int ndevcnt, i;
long flen;
if (!net->netdevice) /* initialized in net_init() */
return;
+ buf = GETBUF(buflen);
flen = MAX(VADDR_PRLEN, strlen(net->netdevice));
fprintf(fp, "%s NAME IP ADDRESS(ES)\n",
@@ -521,12 +527,13 @@ show_net_devices_v2(ulong task)
get_device_name(ld->list_ptr[i], buf);
fprintf(fp, "%-6s ", buf);
- get_device_address(ld->list_ptr[i], buf);
+ buflen = get_device_address(ld->list_ptr[i], &buf, buflen);
fprintf(fp, "%s\n", buf);
}
FREEBUF(ld->list_ptr);
FREEBUF(net_device_buf);
+ FREEBUF(buf);
}
static void
@@ -535,13 +542,15 @@ show_net_devices_v3(ulong task)
ulong nsproxy_p, net_ns_p;
struct list_data list_data, *ld;
char *net_device_buf;
- char buf[BUFSIZE];
+ char *buf;
+ long buflen = BUFSIZE;
int ndevcnt, i;
long flen;
if (!net->netdevice) /* initialized in net_init() */
return;
+ buf = GETBUF(buflen);
flen = MAX(VADDR_PRLEN, strlen(net->netdevice));
fprintf(fp, "%s NAME IP ADDRESS(ES)\n",
@@ -581,12 +590,13 @@ show_net_devices_v3(ulong task)
get_device_name(ld->list_ptr[i], buf);
fprintf(fp, "%-6s ", buf);
- get_device_address(ld->list_ptr[i], buf);
+ buflen = get_device_address(ld->list_ptr[i], &buf, buflen);
fprintf(fp, "%s\n", buf);
}
FREEBUF(ld->list_ptr);
FREEBUF(net_device_buf);
+ FREEBUF(buf);
}
/*
@@ -869,19 +879,24 @@ get_device_name(ulong devaddr, char *buf)
* in_ifaddr->ifa_next points to the next in_ifaddr in the list (if any).
*
*/
-static void
-get_device_address(ulong devaddr, char *buf)
+static long
+get_device_address(ulong devaddr, char **bufp, long buflen)
{
ulong ip_ptr, ifa_list;
struct in_addr ifa_address;
+ char *buf;
+ char buf2[BUFSIZE];
+ long pos = 0;
- BZERO(buf, BUFSIZE);
+ buf = *bufp;
+ BZERO(buf, buflen);
+ BZERO(buf2, BUFSIZE);
readmem(devaddr + net->dev_ip_ptr, KVADDR,
&ip_ptr, sizeof(ulong), "ip_ptr", FAULT_ON_ERROR);
if (!ip_ptr)
- return;
+ return buflen;
readmem(ip_ptr + OFFSET(in_device_ifa_list), KVADDR,
&ifa_list, sizeof(ulong), "ifa_list", FAULT_ON_ERROR);
@@ -891,13 +906,20 @@ get_device_address(ulong devaddr, char *buf)
&ifa_address, sizeof(struct in_addr), "ifa_address",
FAULT_ON_ERROR);
- sprintf(&buf[strlen(buf)], "%s%s",
- strlen(buf) ? ", " : "",
- inet_ntoa(ifa_address));
+ sprintf(buf2, "%s%s", pos ? ", " : "", inet_ntoa(ifa_address));
+ if (pos + strlen(buf2) >= buflen) {
+ RESIZEBUF(*bufp, buflen, buflen * 2);
+ buf = *bufp;
+ BZERO(buf + buflen, buflen);
+ buflen *= 2;
+ }
+ BCOPY(buf2, &buf[pos], strlen(buf2));
+ pos += strlen(buf2);
readmem(ifa_list + OFFSET(in_ifaddr_ifa_next), KVADDR,
&ifa_list, sizeof(ulong), "ifa_next", FAULT_ON_ERROR);
}
+ return buflen;
}
/*
--
1.8.3.1
Re: [Crash-utility] crash: page excluded: kernel virtual address: ffffffff81e3da50 type: "page_offset_base"
by Cao jin
Hi Dave,
Because I am not subscribed to the list, please CC me when replying :) My
colleague helped forward your reply to me, so I am replying
directly in order to keep this mail in the thread rather than starting a new one.
On 11/23/2017 02:04 PM, Fei, Jie/费 杰 wrote:
>
>
>
> -------- Forwarded Message --------
> Subject: Re: [Crash-utility] crash: page excluded: kernel virtual
> address: ffffffff81e3da50 type: "page_offset_base"
> Date: Wed, 22 Nov 2017 09:35:40 -0500
> From: Dave Anderson <anderson(a)redhat.com>
> Reply-To: Discussion list for crash utility usage, maintenance and
> development <crash-utility(a)redhat.com>
> To: Discussion list for crash utility usage, maintenance and
> development <crash-utility(a)redhat.com>
>
>
>
> ----- Original Message -----
>> Hi,
>>
>> I am using the latest crash tool & kernel 4.14 compiled from source, and
>> I got the following error message. As I searched, this is fixed in crash
>> 7.2.0, but I still have it here. So, is anyone has a clue?
>
> It's always going to be a crap-shoot with very recent upstream kernels,
> but you haven't given enough information to determine what the issue is.
>
Sorry, I am new to the dump/crash area.
> If the vmcore was created by "virsh dump", and the kernel has KASLR enabled,
> then it's just not supported at this time. Otherwise, perhaps the output of
> "crash -d8" may yield some clues.
>
I am using kdump to create the vmcore. Actually, I did solve this issue
by adding "nokaslr" to the kernel parameters when I used the built-in crash of
Fedora. But with the version I compiled myself, that solution
doesn't work. Here is the output of `crash -d8`:
$ sudo ./crash -d8 /var/crash/127.0.0.1-2017-11-22-18\:21\:42/vmcore
../linux/vmlinux
crash 7.2.0++
Copyright (C) 2002-2017 Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010 IBM Corporation
Copyright (C) 1999-2006 Hewlett-Packard Co
Copyright (C) 2005, 2006, 2011, 2012 Fujitsu Limited
Copyright (C) 2006, 2007 VA Linux Systems Japan K.K.
Copyright (C) 2005, 2011 NEC Corporation
Copyright (C) 1999, 2002, 2007 Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions. Enter "help copying" to see the conditions.
This program has absolutely no warranty. Enter "help warranty" for details.
compressed kdump: header->utsname.machine: x86_64
compressed kdump: memory bitmap offset: 2000
diskdump_data:
filename: /var/crash/127.0.0.1-2017-11-22-18:21:42/vmcore
flags: 6 (KDUMP_CMPRS_LOCAL|ERROR_EXCLUDED)
dfd: 3
ofp: 0
machine_type: 62 (EM_X86_64)
header: 1013930
signature: "KDUMP "
header_version: 6
utsname:
sysname: Linux
nodename: IAAS1
release: 4.14.0
version: #1 SMP Wed Nov 15 10:32:46 CST 2017
machine: x86_64
domainname: (none)
timestamp:
tv_sec: 5a154fac
tv_usec: 0
status: 2 (DUMP_DH_COMPRESSED_LZO)
block_size: 4096
sub_hdr_size: 1
bitmap_blocks: 80
max_mapnr: 1310208
total_ram_blocks: 0
device_blocks: 0
written_blocks: 0
current_cpu: 0
nr_cpus: 4
tasks[nr_cpus]: 0
0
0
0
sub_header: 0 (n/a)
sub_header_kdump: 1014940
phys_base: 0
dump_level: 31 (0x1f)
(DUMP_EXCLUDE_ZERO|DUMP_EXCLUDE_CACHE|DUMP_EXCLUDE_CACHE_PRI|DUMP_EXCLUDE_USER_DATA|DUMP_EXCLUDE_FREE)
split: 0
start_pfn: (unused)
end_pfn: (unused)
offset_vmcoreinfo: 5648 (0x1610)
size_vmcoreinfo: 1883 (0x75b)
OSRELEASE=4.14.0
PAGESIZE=4096
SYMBOL(init_uts_ns)=ffffffff81e10280
SYMBOL(node_online_map)=ffffffff82030e80
SYMBOL(swapper_pg_dir)=ffffffff81e09000
SYMBOL(_stext)=ffffffff81000000
SYMBOL(vmap_area_list)=ffffffff81efc470
SYMBOL(mem_section)=ffffffff82401dc0
LENGTH(mem_section)=2048
SIZE(mem_section)=16
OFFSET(mem_section.section_mem_map)=0
SIZE(page)=64
SIZE(pglist_data)=172864
SIZE(zone)=1664
SIZE(free_area)=104
SIZE(list_head)=16
SIZE(nodemask_t)=128
OFFSET(page.flags)=0
OFFSET(page._refcount)=28
OFFSET(page.mapping)=8
OFFSET(page.lru)=32
OFFSET(page._mapcount)=24
OFFSET(page.private)=48
OFFSET(page.compound_dtor)=40
OFFSET(page.compound_order)=44
OFFSET(page.compound_head)=32
OFFSET(pglist_data.node_zones)=0
OFFSET(pglist_data.nr_zones)=172192
OFFSET(pglist_data.node_start_pfn)=172200
OFFSET(pglist_data.node_spanned_pages)=172216
OFFSET(pglist_data.node_id)=172224
OFFSET(zone.free_area)=192
OFFSET(zone.vm_stat)=1472
OFFSET(zone.spanned_pages)=112
OFFSET(free_area.free_list)=0
OFFSET(list_head.next)=0
OFFSET(list_head.prev)=8
OFFSET(vmap_area.va_start)=0
OFFSET(vmap_area.list)=48
LENGTH(zone.free_area)=11
SYMBOL(log_buf)=ffffffff81e58480
SYMBOL(log_buf_len)=ffffffff81e5847c
SYMBOL(log_first_idx)=ffffffff823340f8
SYMBOL(clear_idx)=ffffffff823340cc
SYMBOL(log_next_idx)=ffffffff823340e8
SIZE(printk_log)=16
OFFSET(printk_log.ts_nsec)=0
OFFSET(printk_log.len)=8
OFFSET(printk_log.text_len)=10
OFFSET(printk_log.dict_len)=12
LENGTH(free_area.free_list)=6
NUMBER(NR_FREE_PAGES)=0
NUMBER(PG_lru)=5
NUMBER(PG_private)=12
NUMBER(PG_swapcache)=9
NUMBER(PG_slab)=8
NUMBER(PG_hwpoison)=22
NUMBER(PG_head_mask)=32768
NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE)=-128
NUMBER(HUGETLB_PAGE_DTOR)=2
NUMBER(phys_base)=0
SYMBOL(init_top_pgt)=ffffffff81e09000
SYMBOL(node_data)=ffffffff8202c6c0
LENGTH(node_data)=1024
KERNELOFFSET=0
NUMBER(KERNEL_IMAGE_SIZE)=1073741824
CRASHTIME=1511346092
offset_note: 4200 (0x1068)
size_note: 3332 (0xd04)
notes_buf: 1015950
num_prstatus_notes: 4
notes[0]: 1015950 (NT_PRSTATUS)
si.signo: 0 si.code: 0 si.errno: 0
cursig: 0 sigpend: 0 sighold: 0
pid: 114 ppid: 0 pgrp: 0 sid:0
utime: 0.000000 stime: 0.000000
cutime: 0.000000 cstime: 0.000000
ORIG_RAX: ffffffffffffffff fpvalid: 0
R15: ffffffff82445b80 R14: ffffffff822f3422
R13: 0000000000000020 R12: 00000000000026f5
RBP: ffffc90000adbce0 RBX: ffffffff82445b80
R11: ffffffff822f342d R10: 0000000000000000
R9: 000000000000000f R8: 0000000000000000
RAX: 000000799065724b RCX: 0000007990656c1b
RDX: 0000000000000000 RSI: 0000000000000000
RDI: 0000000000000cdd RIP: ffffffff8188600f
RFLAGS: 0000000000000097 RSP: ffffc90000adbce0
FS_BASE: 0000000000000000
GS_BASE: 0000000000000000
CS: 0010 SS: 0018 DS: 0000
ES: 0000 FS: 0000 GS: 0000
notes[1]: 1015ab4 (NT_PRSTATUS)
si.signo: 0 si.code: 0 si.errno: 0
cursig: 0 sigpend: 0 sighold: 0
pid: 2030 ppid: 0 pgrp: 0 sid:0
utime: 0.000000 stime: 0.000000
cutime: 0.000000 cstime: 0.000000
ORIG_RAX: ffffffffffffffff fpvalid: 0
R15: ffff880131999500 R14: ffffffff81f76fa0
R13: 0000000000000000 R12: 0000000000000007
RBP: ffffc90001dabdd8 RBX: 0000000000000063
R11: ffffffff822f342d R10: 0000000000000001
R9: 0000000000000007 R8: 00000000000002d9
RAX: 000000000000000f RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffff88013fa8e138
RDI: 0000000000000063 RIP: ffffffff81532ad6
RFLAGS: 0000000000010282 RSP: ffffc90001dabdd8
FS_BASE: 00007fe58251fb80
GS_BASE: 0000000000000000
CS: 0010 SS: 0018 DS: 0000
ES: 0000 FS: 0000 GS: 0000
notes[2]: 1015c18 (NT_PRSTATUS)
si.signo: 0 si.code: 0 si.errno: 0
cursig: 0 sigpend: 0 sighold: 0
pid: 0 ppid: 0 pgrp: 0 sid:0
utime: 0.000000 stime: 0.000000
cutime: 0.000000 cstime: 0.000000
ORIG_RAX: ffffffffffffffff fpvalid: 0
R15: ffffffff81f65618 R14: 0000000000000020
R13: 0000000000000004 R12: 0000000000000003
RBP: ffffc900006a3e60 RBX: 0000000000000008
R11: 000000000000298b R10: ffffc900006a3e40
R9: 0000000000000018 R8: 00000000000054a2
RAX: 0000000000000020 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffff81f65480
RDI: 0000000000000002 RIP: ffffffff81895ddc
RFLAGS: 0000000000000046 RSP: ffffc900006a3e40
FS_BASE: 0000000000000000
GS_BASE: 0000000000000000
CS: 0010 SS: 0018 DS: 0000
ES: 0000 FS: 0000 GS: 0000
notes[3]: 1015d7c (NT_PRSTATUS)
si.signo: 0 si.code: 0 si.errno: 0
cursig: 0 sigpend: 0 sighold: 0
pid: 0 ppid: 0 pgrp: 0 sid:0
utime: 0.000000 stime: 0.000000
cutime: 0.000000 cstime: 0.000000
ORIG_RAX: ffffffffffffffff fpvalid: 0
R15: ffffffff81f65618 R14: 0000000000000020
R13: 0000000000000004 R12: 0000000000000003
RBP: ffffc900006abe60 RBX: 0000000000000008
R11: 000000000000037e R10: ffffc900006abe40
R9: 0000000000000008 R8: 00000000ffffffff
RAX: 0000000000000020 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffff81f65480
RDI: 0000000000000003 RIP: ffffffff81895ddc
RFLAGS: 0000000000000046 RSP: ffffc900006abe40
FS_BASE: 0000000000000000
GS_BASE: 0000000000000000
CS: 0010 SS: 0018 DS: 0000
ES: 0000 FS: 0000 GS: 0000
snapshot_task: 0
num_qemu_notes: 0
NOTE offsets: 1068 (NT_PRSTATUS)
11cc (NT_PRSTATUS)
1330 (NT_PRSTATUS)
1494 (NT_PRSTATUS)
offset_eraseinfo: 0 (0x0)
size_eraseinfo: 0 (0x0)
start_pfn_64: (unused)
end_pfn_64: (unused)
max_mapnr_64: 1310208 (0x13fe00)
data_offset: 52000
block_size: 4096
block_shift: 12
bitmap: 7f8ff466c010
bitmap_len: 327680
max_mapnr: 1310208 (0x13fe00)
dumpable_bitmap: 7f8ff461b010
byte: 0
bit: 0
compressed_page: 10470a0
curbufptr: 0
page_cache_hdr[0]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 1037090
pg_hit_count: 0
page_cache_hdr[1]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 1038090
pg_hit_count: 0
page_cache_hdr[2]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 1039090
pg_hit_count: 0
page_cache_hdr[3]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 103a090
pg_hit_count: 0
page_cache_hdr[4]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 103b090
pg_hit_count: 0
page_cache_hdr[5]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 103c090
pg_hit_count: 0
page_cache_hdr[6]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 103d090
pg_hit_count: 0
page_cache_hdr[7]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 103e090
pg_hit_count: 0
page_cache_hdr[8]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 103f090
pg_hit_count: 0
page_cache_hdr[9]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 1040090
pg_hit_count: 0
page_cache_hdr[10]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 1041090
pg_hit_count: 0
page_cache_hdr[11]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 1042090
pg_hit_count: 0
page_cache_hdr[12]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 1043090
pg_hit_count: 0
page_cache_hdr[13]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 1044090
pg_hit_count: 0
page_cache_hdr[14]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 1045090
pg_hit_count: 0
page_cache_hdr[15]:
pg_flags: 0 ()
pg_addr: 0
pg_bufptr: 1046090
pg_hit_count: 0
page_cache_buf: 1037090
evict_index: 0
evictions: 0
accesses: 0
cached_reads: 0
valid_pages: 1036680
readmem: read_diskdump()
crash: pv_init_ops exists: ARCH_PVOPS
VMCOREINFO: NUMBER(phys_base): 0 -> 0
gdb ../linux/vmlinux
GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
<http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...
GETBUF(288 -> 0)
GETBUF(1500 -> 1)
FREEBUF(1)
FREEBUF(0)
<readmem: ffffffff81e3da50, KVADDR, "page_offset_base", 8, (FOE), d83568>
<read_diskdump: addr: ffffffff81e3da50 paddr: 1e3da50 cnt: 8>
read_diskdump: PAGE_EXCLUDED: paddr/pfn: 1e3da50/1e3d
crash: page excluded: kernel virtual address: ffffffff81e3da50 type:
"page_offset_base"
--
Sincerely,
Cao jin
> Dave
>
>
>
>>
>> [root@IAAS1 crash]# ./crash
>> /var/crash/127.0.0.1-2017-11-22-11\:57\:51/vmcore ../linux/vmlinux
>>
>> crash 7.2.0++
>> Copyright (C) 2002-2017 Red Hat, Inc.
>> Copyright (C) 2004, 2005, 2006, 2010 IBM Corporation
>> Copyright (C) 1999-2006 Hewlett-Packard Co
>> Copyright (C) 2005, 2006, 2011, 2012 Fujitsu Limited
>> Copyright (C) 2006, 2007 VA Linux Systems Japan K.K.
>> Copyright (C) 2005, 2011 NEC Corporation
>> Copyright (C) 1999, 2002, 2007 Silicon Graphics, Inc.
>> Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc.
>> This program is free software, covered by the GNU General Public License,
>> and you are welcome to change it and/or distribute copies of it under
>> certain conditions. Enter "help copying" to see the conditions.
>> This program has absolutely no warranty. Enter "help warranty" for details.
>>
>> GNU gdb (GDB) 7.6
>> Copyright (C) 2013 Free Software Foundation, Inc.
>> License GPLv3+: GNU GPL version 3 or later
>> <http://gnu.org/licenses/gpl.html>
>> This is free software: you are free to change and redistribute it.
>> There is NO WARRANTY, to the extent permitted by law. Type "show copying"
>> and "show warranty" for details.
>> This GDB was configured as "x86_64-unknown-linux-gnu"...
>>
>> crash: page excluded: kernel virtual address: ffffffff81e3da50 type:
>> "page_offset_base"
>>
>> --
>> Sincerely,
>> Cao jin
>>
>>
>> --
>> Crash-utility mailing list
>> Crash-utility(a)redhat.com
>> https://www.redhat.com/mailman/listinfo/crash-utility
>>
>
> --
> Crash-utility mailing list
> Crash-utility(a)redhat.com
> https://www.redhat.com/mailman/listinfo/crash-utility
>
>