Re: [Crash-utility] Kernel Crash Analysis on Android
by Shankar, AmarX

                                        Hi Dave,
Thanks for the information regarding the kexec tool.
I am unable to download kexec-tools from the link below:
http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/kexec-too...
It returns HTTP 404 Page Not Found.
Could you please guide me on this?
Thanks & Regards,
Amar Shankar
> On Wed, Mar 21, 2012 at 06:00:00PM +0000, Shankar, AmarX wrote:
>
> > I want to do kernel crash analysis on an Android Merrifield target.
> >
> > Could someone please help me with how to do it?
>
> Merrifield is pretty much similar to Medfield, e.g., it has an x86 core. So I
> guess you can follow the instructions on how to set up kdump on x86 (see
> Documentation/kdump/kdump.txt), unless you already have that configured.
>
> crash should support this directly, presuming you have vmlinux/vmcore files to
> feed it. You can configure crash to support x86 on an x86_64 host by running:
>
> % make target=X86
> % make
>
> (or something along those lines).
Right -- just the first make command will suffice, i.e., when running
on an x86_64 host:
$ wget http://people.redhat.com/anderson/crash-6.0.4.tar.gz
$ tar xzf crash-6.0.4.tar.gz
...
$ cd crash-6.0.4
$ make target=X86
...
$ ./crash <path-to>/vmlinux <path-to>/vmcore
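For completeness, the kdump side on the target boils down to something like
the following (the standard steps from Documentation/kdump/kdump.txt; the
paths, the reservation size and the append string here are only placeholders,
not Merrifield-specific values):

  # reserve memory for the capture kernel at boot time by adding
  # crashkernel=128M to the production kernel's command line, then:
  $ kexec -p /boot/vmlinuz-capture \
          --initrd=/boot/initrd-capture.img \
          --append="root=/dev/sda1 irqpoll maxcpus=1 reset_devices"
  # after a panic the capture kernel boots and exposes the crashed kernel's
  # memory as /proc/vmcore, which you save and feed to crash along with the
  # matching vmlinux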
Dave
From: Shankar, AmarX
Sent: Wednesday, March 21, 2012 11:30 PM
To: 'crash-utility(a)redhat.com'
Subject: Kernel Crash Analysis on Android
Hi,
I want to do kernel crash analysis on an Android Merrifield target.
Could someone please help me with how to do it?
Thanks & Regards,
Amar Shankar

[PATCH] kmem, snap: iomem/ioport display and vmcore snapshot support
by HATAYAMA Daisuke

Some days ago I was in a situation where I had to convert a vmcore in
kvmdump format into ELF, since an extension module we maintain locally
can only be used with a relatively old crash utility, around version 4,
and such an old crash utility cannot handle the kvmdump format.
To make the conversion handy, I used the snap command with some modifications
so that it tries to use the iomem information in the vmcore instead of the
host's /proc/iomem. This patch is its cleaned-up version.
During this work, I naturally ended up also making an interface for
accessing resource objects, so together with the snap command's patch,
I have extended the kmem command with iomem/ioport support. Specifically:
kmem -r displays /proc/iomem
crash> kmem -r
00000000-0000ffff : reserved
00010000-0009dbff : System RAM
0009dc00-0009ffff : reserved
000c0000-000c7fff : Video ROM
...
and kmem -R displays /proc/ioports
crash> kmem -R
0000-001f : dma1
0020-0021 : pic1
0040-0043 : timer0
0050-0053 : timer1
...
Looking back into older versions of the kernel source code, the resource
structure has been unchanged since linux-2.4.0. I borrowed the way of walking
the resource tree in this patch from the latest v3.3-rc series, but I
guess the logic is also applicable to old kernels. I'm counting on Dave's
regression test suite here.
Also, there might be another command more suitable for iomem/ioport than
kmem. If necessary, I'll repost the patch.
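For reference, the walk over the resource tree that this boils down to looks
roughly like the sketch below. The field names follow the upstream kernel's
struct resource (with simplified types); the actual patch of course reads the
objects out of the vmcore through crash's OFFSET()/readmem machinery rather
than dereferencing pointers directly:

  #include <stdio.h>

  struct resource {
      unsigned long start, end;
      const char *name;
      struct resource *parent, *sibling, *child;
  };

  /* depth-first, pre-order traversal, the same order in which
   * /proc/iomem and /proc/ioports print their entries */
  struct resource *next_resource(struct resource *p)
  {
      if (p->child)
          return p->child;            /* descend into children first */
      while (!p->sibling && p->parent)
          p = p->parent;              /* climb back up when a level is done */
      return p->sibling;              /* then move on to the next sibling */
  }

  void dump_resources(struct resource *root)
  {
      struct resource *p;

      for (p = root->child; p; p = next_resource(p))
          printf("%08lx-%08lx : %s\n", p->start, p->end, p->name);
  }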
---
HATAYAMA Daisuke (4):
      Add vmcore snapshot support
      Add kmem -r and -R options
      Add dump iomem/ioport functions; a helper for resource objects
      Add a helper function for iterating resource objects
 defs.h            |    9 ++++
 extensions/snap.c |   54 ++++++++++++++++++++++-
 help.c            |    2 +
 memory.c          |  122 +++++++++++++++++++++++++++++++++++++++++++++++++++--
 4 files changed, 180 insertions(+), 7 deletions(-)
--
Thanks.
HATAYAMA Daisuke

Re: [Crash-utility] question about phys_base
by Dave Anderson

----- Original Message -----
> >
> > OK, so then I don't understand what you mean by "may be the same"?
> >
> > You didn't answer my original question, but if I understand you correctly,
> > it would be impossible for the qemu host to create a PT_LOAD segment that
> > describes an x86_64 guest's __START_KERNEL_map region, because the host
> > doesn't know what kind of kernel the guest is running.
> 
> Yes. Even if the guest is Linux, it is still impossible to do, because
> the guest may be in the second kernel.
> 
> qemu-dump walks all of the guest's page tables and collects the virtual-to-
> physical address mapping. If a page is not used by the guest, the virtual
> address is set to 0. I create the PT_LOAD segments according to that mapping.
> So if the guest is Linux, there may be a PT_LOAD segment that describes the
> __START_KERNEL_map region. But the information stored in that PT_LOAD may be
> for the second kernel. If crash uses it, crash will see the second kernel,
> not the first kernel.
Just to be clear -- what do you mean by the "second" kernel?  Do you
mean that the guest kernel crashed, performed a kdump operation,
that the second (kdump) kernel then failed somehow, and now you're trying
to do a "virsh dump" on the kdump kernel?
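For reference, the mapping in question is visible directly in the ELF program
headers, and on x86_64 the relation crash ultimately needs for a kernel-text
PT_LOAD segment is sketched below. The readelf output and the numbers are
only an illustration, not a claim about what qemu-dump currently writes:

  $ readelf -l vmcore | grep -A1 LOAD
    LOAD   0x...   0xffffffff81000000   0x0000000001000000   ...
                   ^ p_vaddr            ^ p_paddr

  phys_base = p_paddr - (p_vaddr - __START_KERNEL_map)
            = 0x0000000001000000 - (0xffffffff81000000 - 0xffffffff80000000)
            = 0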
Dave

question about phys_base
by Wen Congyang

                                        Hi, Dave
I am implementing a new dump command in qemu. The vmcore's
format is ELF (like kdump), and I am trying to provide phys_base in
the PT_LOAD segment. But if the OS uses the first vcpu to do kdump, the
value of phys_base comes out wrong.
I found the function x86_64_virt_phys_base() in crash's code.
Is it OK to call this function first? If the function
succeeds, we would not calculate phys_base from the PT_LOAD segment.
Thanks
Wen Congyang

[PATCH] runq: search current task's runqueue explicitly
by HATAYAMA Daisuke

Currently, the runq sub-command doesn't take into account that a CFS
runqueue's current task is removed from the CFS runqueue. Because of this,
the remaining CFS runqueues that follow the current task's runqueue are not
displayed. This patch fixes this by making the runq sub-command search the
current task's runqueue explicitly.
Note that a CFS runqueue exists for each task group, each with its own
current task, so the above search needs to be done recursively.
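Conceptually, the search added here amounts to the recursion below (a sketch
using the upstream kernel's field names, cfs_rq->curr and sched_entity->my_q;
the real patch walks the same chain through crash's OFFSET()/readmem
accessors, and the pid field is just an illustrative stand-in for the owning
task):

  #include <stdio.h>

  struct cfs_rq;

  struct sched_entity {
      struct cfs_rq *my_q;   /* non-NULL for a group entity: the group's own runqueue */
      int pid;               /* stand-in for the task owning a leaf entity */
  };

  struct cfs_rq {
      struct sched_entity *curr;  /* running entity, removed from the rbtree */
  };

  /*
   * The entity currently running on a CFS runqueue is not on the rbtree,
   * so it has to be reported explicitly.  If it is a group entity, descend
   * into the group's own cfs_rq, whose ->curr is again off the tree,
   * and so on down to the running task.
   */
  void report_current(struct cfs_rq *cfs_rq)
  {
      struct sched_entity *se = cfs_rq->curr;

      if (!se)
          return;
      if (se->my_q) {
          report_current(se->my_q);
          return;
      }
      printf("current: PID %d\n", se->pid);
  }

  int main(void)
  {
      /* fake two-level hierarchy: root cfs_rq -> group A -> running task 28263 */
      struct sched_entity task = { .my_q = NULL, .pid = 28263 };
      struct cfs_rq group_a = { .curr = &task };
      struct sched_entity group_a_se = { .my_q = &group_a, .pid = 0 };
      struct cfs_rq root = { .curr = &group_a_se };

      report_current(&root);
      return 0;
  }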
Test
====
On the vmcore I made 7 task groups:
  root group --- A --- AA --- AAA
                    +      +- AAB
                    |
                    +- AB --- ABA
                           +- ABB
and then, in each task group including the root group, I ran three CPU-bound
tasks, each exactly equivalent to
  int main(void) { for (;;) continue; return 0; }
so 24 tasks in total. For readability, I annotated each task name with the
name of the group it belongs to. For example, loop.ABA belongs to task
group ABA.
Look at the CPU 0 column below. [before] lacks 8 tasks, while [after]
successfully shows all tasks on the runqueue, identical to the result of
[sched debug], which is expected to output the correct result.
I'll send this vmcore later.
[before]
crash> runq | cat
CPU 0 RUNQUEUE: ffff88000a215f80
  CURRENT: PID: 28263  TASK: ffff880037aaa040  COMMAND: "loop.ABA"
  RT PRIO_ARRAY: ffff88000a216098
     [no tasks queued]
  CFS RB_ROOT: ffff88000a216010
     [120] PID: 28262  TASK: ffff880037cc40c0  COMMAND: "loop.ABA"
<cut>
[after]
crash_fix> runq
CPU 0 RUNQUEUE: ffff88000a215f80
  CURRENT: PID: 28263  TASK: ffff880037aaa040  COMMAND: "loop.ABA"
  RT PRIO_ARRAY: ffff88000a216098
     [no tasks queued]
  CFS RB_ROOT: ffff88000a216010
     [120] PID: 28262  TASK: ffff880037cc40c0  COMMAND: "loop.ABA"
     [120] PID: 28271  TASK: ffff8800787a8b40  COMMAND: "loop.ABB"
     [120] PID: 28272  TASK: ffff880037afd580  COMMAND: "loop.ABB"
     [120] PID: 28245  TASK: ffff8800785e8b00  COMMAND: "loop.AB"
     [120] PID: 28246  TASK: ffff880078628ac0  COMMAND: "loop.AB"
     [120] PID: 28241  TASK: ffff880078616b40  COMMAND: "loop.AA"
     [120] PID: 28239  TASK: ffff8800785774c0  COMMAND: "loop.AA"
     [120] PID: 28240  TASK: ffff880078617580  COMMAND: "loop.AA"
     [120] PID: 28232  TASK: ffff880079b5d4c0  COMMAND: "loop.A"
<cut>
[sched debug]
crash> runq -d
CPU 0
     [120] PID: 28232  TASK: ffff880079b5d4c0  COMMAND: "loop.A"
     [120] PID: 28239  TASK: ffff8800785774c0  COMMAND: "loop.AA"
     [120] PID: 28240  TASK: ffff880078617580  COMMAND: "loop.AA"
     [120] PID: 28241  TASK: ffff880078616b40  COMMAND: "loop.AA"
     [120] PID: 28245  TASK: ffff8800785e8b00  COMMAND: "loop.AB"
     [120] PID: 28246  TASK: ffff880078628ac0  COMMAND: "loop.AB"
     [120] PID: 28262  TASK: ffff880037cc40c0  COMMAND: "loop.ABA"
     [120] PID: 28263  TASK: ffff880037aaa040  COMMAND: "loop.ABA"
     [120] PID: 28271  TASK: ffff8800787a8b40  COMMAND: "loop.ABB"
     [120] PID: 28272  TASK: ffff880037afd580  COMMAND: "loop.ABB"
<cut>
Diff stat
=========
 defs.h |    1 +
 task.c |   37 +++++++++++++++++--------------------
 2 files changed, 18 insertions(+), 20 deletions(-)
Thanks.
HATAYAMA, Daisuke

[RFC] makedumpfile, crash: LZO compression support
by HATAYAMA Daisuke

                                        Hello,
This is an RFC patch set that adds LZO compression support to
makedumpfile and the crash utility. LZO is about as good as ZLIB in
output size but far better in speed, reducing downtime during crash
dump generation and refiltering.
How to build:
  1. Get the LZO library, which is provided as the lzo-devel package on recent
  Linux distributions, and is also available on the author's website:
  http://www.oberhumer.com/opensource/lzo/.
  2. Apply the patch set to makedumpfile v1.4.0 and crash v6.0.0.
  3. Build both using make. But for crash, for now do the following:
    $ make CFLAGS="-llzo2"
How to use:
  This patch introduces a new -l option for LZO compression. So, for
  example, do the following:
  $ makedumpfile -l vmcore dumpfile
  $ crash vmlinux dumpfile
Request for a configure-like feature in the crash utility:
  I would like a configure-like feature in the crash utility so that users can
  select whether or not to enable the LZO feature at build time,
  that is: ./configure --enable-lzo or ./configure --disable-lzo.
  The reason is that support staff often download and install the
  latest version of the crash utility on machines where the lzo library is
  not available.
  Looking at the source code, it appears that crash does its own
  configuration processing locally, around configure.c,
  and I guess it's difficult to use the autoconf tools directly.
  Or is there a better way?
Performance Comparison:
  Sample Data
    Ideally, I should have measured the performance on enough sample
    vmcores generated from machines that were actually running, but I
    don't have enough sample vmcores right now, so I couldn't do that.
    Therefore this comparison doesn't answer the question of I/O time
    improvement; that remains a TODO for now.
    Instead, I chose the worst and best cases with regard to compression
    ratio and speed only. Specifically, the former is /dev/urandom and
    the latter is /dev/zero.
    I generated sample data of 10MB, 100MB and 1GB like this:
      $ dd bs=4096 count=$((1024*1024*1024/4096)) if=/dev/urandom of=urandom.1GB
  How to measure
    I then compressed the data one 4096-byte block at a time and
    measured the total compression time and output size. See the attached
    mycompress.c.
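    (The attached mycompress.c is not reproduced here, but a per-block
    compression loop of this kind would look roughly like the sketch below,
    written against liblzo2's lzo1x_1_compress; the output buffer size
    follows LZO's documented worst-case bound.)

      #include <stdio.h>
      #include <lzo/lzo1x.h>

      #define BLOCK_SIZE 4096
      /* worst case: LZO output can be slightly larger than the input */
      #define OUT_SIZE (BLOCK_SIZE + BLOCK_SIZE / 16 + 64 + 3)

      int main(int argc, char **argv)
      {
          unsigned char in[BLOCK_SIZE], out[OUT_SIZE];
          unsigned char wrkmem[LZO1X_1_MEM_COMPRESS];
          unsigned long long total_in = 0, total_out = 0;
          size_t n;
          FILE *fp;

          if (argc != 2 || !(fp = fopen(argv[1], "rb"))) {
              fprintf(stderr, "usage: %s <file>\n", argv[0]);
              return 1;
          }
          if (lzo_init() != LZO_E_OK)
              return 1;

          /* compress block by block, the same granularity makedumpfile
           * uses for pages */
          while ((n = fread(in, 1, BLOCK_SIZE, fp)) > 0) {
              lzo_uint out_len = OUT_SIZE;

              if (lzo1x_1_compress(in, n, out, &out_len, wrkmem) != LZO_E_OK)
                  return 1;
              total_in += n;
              total_out += out_len;
          }
          fclose(fp);
          printf("in: %llu bytes, out: %llu bytes\n", total_in, total_out);
          return 0;
      }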
  Result
    See attached file result.txt.
  Discussion
    For both kinds of data, LZO's compression was considerably quicker
    than zlib's: the compression time ratio (LZO relative to zlib) is about
    37% for the urandom data and about 8.5% for the zero data. The actual
    contents of physical memory would fall somewhere between these two
    cases, so I guess the average compression time ratio is between 8.5%
    and 37%.
    Although beyond the scope of this patch set, we can extrapolate the
    worst-case compression time to larger data sizes, since compression is
    performed block by block and the compression time increases linearly.
    The estimated worst-case time for 2TB of memory is about 15 hours for
    LZO and about 40 hours for zlib. In that case the compressed data is
    larger than the original and so is not actually used, which makes the
    compression time pure waste. I think compression must be done in
    parallel, and I'll post such a patch later.
Diffstat
  * makedumpfile
 diskdump_mod.h |    3 +-
 makedumpfile.c |   98 +++++++++++++++++++++++++++++++++++++++++++++++++------
 makedumpfile.h |   12 +++++++
 3 files changed, 101 insertions(+), 12 deletions(-)
  * crash
 defs.h     |    1 +
 diskdump.c |   20 +++++++++++++++++++-
 diskdump.h |    3 ++-
 3 files changed, 22 insertions(+), 2 deletions(-)
TODO
  * evaluation including I/O time using actual vmcores
Thanks.
HATAYAMA, Daisuke

Re: [Crash-utility] [RFI] Support Fujitsu's sadump dump format
by tachibana@mxm.nes.nec.co.jp

                                        Hi Hatayama-san,
On 2011/06/29 12:12:18 +0900, HATAYAMA Daisuke <d.hatayama(a)jp.fujitsu.com> wrote:
> From: Dave Anderson <anderson(a)redhat.com>
> Subject: Re: [Crash-utility] [RFI] Support Fujitsu's sadump dump format
> Date: Tue, 28 Jun 2011 08:57:42 -0400 (EDT)
> 
> > 
> > 
> > ----- Original Message -----
> >> Fujitsu has a stand-alone dump mechanism based on firmware-level
> >> functionality, which we call SADUMP for short.
> >> 
> >> We've maintained utility tools internally, but we now think the best
> >> approach is for the crash utility and makedumpfile to support the sadump
> >> format, from the viewpoint of both portability and maintainability.
> >> 
> >> We will of course be responsible for its maintenance on a continuing
> >> basis. The sadump format is very similar to the diskdump format, and
> >> hence to the kdump (compressed) format, so we estimate the patch set
> >> would be relatively small.
> >> 
> >> Could you tell me whether the crash utility and makedumpfile can support
> >> the sadump format? If so, we'll start making the patch set.
I think it would be reasonable for makedumpfile to support sadump. However,
I have several questions.
- Do you want to use makedumpfile to shrink an existing file that sadump
  has already dumped?
- It isn't currently possible to support the same format as the
  kdump-compressed format, is it?
- When the information that makedumpfile reads from a note in /proc/vmcore
  (or from a kdump-compressed format header) is extended by a newer
  makedumpfile, will sadump need to be modified as well?
Thanks
tachibana
> > 
> > Sure, yes, the crash utility can always support another dumpfile format.
> > 
> 
> Thanks. It helps a lot.
> 
> > It's unclear to me how similar SADUMP is to diskdump/compressed-kdump.
> > Does your internal version patch diskdump.c, or do you maintain your
> > own "sadump.c"?  I ask because if your patchset is at all intrusive,
> > I'd prefer it be kept in its own file, primarily for maintainability,
> > but also because SADUMP is essentially a black-box to anybody outside
> > Fujitsu.
> 
> When I said ``similar'' I meant it both literally and logically. The
> format consists of a diskdump-like header, two kinds of bitmaps used
> for the same purpose as those in the diskdump format, and the memory
> data. These can be handled non-intrusively with the existing data
> structure, diskdump_data, so I hope they can be placed in diskdump.c.
> 
> On the other hand, there is some code that belongs in its own specific
> place. sadump is triggered at some point during kdump's progress, and
> the register values contained in the vmcore vary according to that
> progress: if crash_notes has already been initialized when sadump is
> triggered, sadump packs the register values into crash_notes; if not,
> it packs the registers gathered by the firmware. This is
> sadump-specific processing, so I think putting it in a dedicated
> sadump.c file is a natural and reasonable choice.
> 
> Anyway, I have not made any patch set for this. I'll post a patch set
> when I complete.
> 
> Again, thanks a lot for the positive answer.
> 
> Thanks.
> HATAYAMA, Daisuke
> 
> 
> _______________________________________________
> kexec mailing list
> kexec(a)lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec

crash-7.3.2 very long list iteration progressively increasing memory usage
by David Wysochanski

                                        Hi Dave,
We have a fairly large vmcore (around 250GB) with a very long kmem
cache list, and we are trying to determine whether a loop exists in it.
The list has literally billions of entries.  Before you roll your eyes,
hear me out.
Just running the following command
crash> list -H 0xffff8ac03c81fc28 > list-yeller.txt
seems to increase crash's memory usage very significantly over time, to
the point that top shows the following:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
25522 dwysocha  20   0 11.2g  10g 5228 R 97.8 17.5   1106:34 crash
When I started the command yesterday it was adding around 4 million
entries to the file per minute.  At the time I estimated the command
would finish in around 10 hours and that I could use it to determine
whether there was a loop in the list or not.  But today it has slowed
down to less than 1/10th of that, to around 300k entries per minute.
Is this type of memory usage with list enumeration expected or not?
I have not yet begun to delve into the code, but I figured you might have
a gut feeling as to whether this is expected and fixable.
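As an aside on the loop question itself: if I remember correctly, crash hashes
every entry it visits precisely so that it can detect duplicates and loops,
which would explain memory growing with the number of entries. Whether or not
that is the cause here, a loop can in principle be detected without storing
the entries at all, e.g. with the classic two-pointer walk sketched below
(illustrative only -- inside crash the next pointers would be fetched from the
vmcore with readmem rather than dereferenced):

  #include <stdio.h>

  struct node {
      struct node *next;
  };

  /* Floyd's cycle detection: advance one cursor by 1 and one by 2 per step;
   * they meet if and only if the list loops back on itself. */
  int list_has_loop(struct node *head)
  {
      struct node *slow = head, *fast = head;

      while (fast && fast->next) {
          slow = slow->next;
          fast = fast->next->next;
          if (slow == fast)
              return 1;
      }
      return 0;       /* fast reached NULL: the list terminates */
  }

  int main(void)
  {
      struct node a, b, c;

      a.next = &b; b.next = &c; c.next = &a;   /* deliberately looped */
      printf("loop: %s\n", list_has_loop(&a) ? "yes" : "no");
      return 0;
  }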
Thanks.

Redhat 5.11 dump
by Ilan Schwarts

                                        Hi,
A production machine with Red Hat Enterprise Linux Server release 5.11
(Tikanga) kernel 2.6.18-431.el5 crashed.
I received the vmcore. Since I don't have access to the machine and I
need to open the vmcore to analyze the dump, I installed CentOS 5.11.
Because CentOS 5.11 is EOL, I downloaded the 2.6.18-431.el5 kernel
manually, along with its debuginfo and debuginfo-common packages, and
installed them; so far everything seems OK.
I am trying to get the call stack at the time of the dump, but I get errors:
[root@centos511test kernel]# crash vmcore
/usr/lib/debug/lib/modules/`uname -r`/vmlinux
crash 5.1.8-3.el5.centos
Copyright (C) 2002-2011  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.
GNU gdb (GDB) 7.0
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...
crash: page excluded: kernel virtual address: ffffffff804d3ae0  type: "possible"
WARNING: cannot read cpu_possible_map
crash: page excluded: kernel virtual address: ffffffff804735e0  type: "present"
WARNING: cannot read cpu_present_map
crash: page excluded: kernel virtual address: ffffffff8046e260  type: "online"
WARNING: cannot read cpu_online_map
crash: page excluded: kernel virtual address: ffffffff80475210  type: "xtime"
[root@centos511test kernel]#
Any thoughts? How can I continue?
Thanks a lot
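Two quick checks that often help with "page excluded" warnings like these
(standard crash command-line options; whether they resolve this particular
case is of course not guaranteed):

  # confirm the dump really comes from the kernel the debuginfo was built for
  # (it should print 2.6.18-431.el5)
  $ crash --osrelease vmcore
  # pass the vmlinux path explicitly instead of via `uname -r`, in case the
  # running kernel on the analysis box differs from the crashed one
  $ crash /usr/lib/debug/lib/modules/2.6.18-431.el5/vmlinux vmcore
  # if the warnings persist, pages excluded by dump filtering can be treated
  # as zero-filled
  $ crash --zero_excluded /usr/lib/debug/lib/modules/2.6.18-431.el5/vmlinux vmcore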