[PATCH] Fix segmentation fault
by Bernhard Walle
* Executing crash without any parameters results in a segmentation fault.
* Add a NULL check for pc->orig_namelist to avoid the segmentation fault.
Signed-off-by: Sachin Sant <sachinp(a)in.ibm.com>
Acked-by: Bernhard Walle <bwalle(a)suse.de>
1 file changed, 3 insertions(+)
symbols.c | 3 +++
Re: [Crash-utility] [Patch] ia64 block_size mismatch issue.
by Dave Anderson
----- "Robin Holt" <holt(a)sgi.com> wrote:
> On Fri, Jan 09, 2009 at 01:22:39PM -0500, Dave Anderson wrote:
> > But for kdump vmcores, it seems that kdump_page_size() needs to
> > be made smarter, although I'm not sure where it would get the
> > page size -- vmcoreinfo?
>
> This was an ia64 kdump-created vmcore. I am a little confused about the
> difference, but that much I do know.
>
> Robin
Now I'm confused -- if you're looking at an ia64 kdump vmcore, then
why did your patch modify the diskdump code?
Dave
Re: [Crash-utility] [Patch] ia64 block_size mismatch issue.
by Dave Anderson
----- "Bernhard Walle" <bwalle(a)suse.de> wrote:
> * Robin Holt [2009-01-09 11:40]:
> >
> >
> > ia64 recently changed the default page size from 16KB to 64KB. Trying
> > to analyze a dump taken on a 64KB system on a 16KB page system fails.
> > Fix this problem by reallocating and rereading the header when block_size
> > mismatches.
>
> That's only one part of the problem. Your patch doesn't handle ELF
> dumps (netdump.c). There are also other parts that need updating. I
> have a more complete patch that was also tested on PPC. Dave, should I
> port that patch to the current crash release, or would you rather
> those changes stayed out of crash?
Robin's patch should be OK for diskdump vmcores, although I wasn't even
aware that the diskdump facility was still actively being carried forward
into post-kdump kernels.
Red Hat stopped supporting diskdump when RHEL5 (2.6.18+) was released,
but presuming that RHEL6/ia64 inherits 64k pages, then this patch would
be needed to analyze RHEL3/RHEL4 diskdumps on a RHEL6 host.
But for kdump vmcores, it seems that kdump_page_size() needs to
be made smarter, although I'm not sure where it would get the
page size -- vmcoreinfo?
And FWIW, the default for ia64 xendumps is hardwired to 16k.
Dave
[Patch] ia64 block_size mismatch issue.
by Robin Holt
ia64 recently changed the default page size from 16KB to 64KB. Trying
to analyze a dump taken on a 64KB system on a 16KB page system fails.
Fix this problem by reallocating and rereading the header when block_size
mismatches.
Signed-off-by: Robin Holt <holt(a)sgi.com>
---
diskdump.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
Index: crash-4.0-7.5/diskdump.c
===================================================================
--- crash-4.0-7.5.orig/diskdump.c 2008-12-05 09:06:09.000000000 -0600
+++ crash-4.0-7.5/diskdump.c 2009-01-08 11:46:53.193321876 -0600
@@ -107,12 +107,13 @@ static int read_dump_header(char *file)
struct disk_dump_sub_header *sub_header = NULL;
struct kdump_sub_header *sub_header_kdump = NULL;
int bitmap_len;
- const int block_size = (int)sysconf(_SC_PAGESIZE);
+ int block_size = (int)sysconf(_SC_PAGESIZE);
off_t offset;
const off_t failed = (off_t)-1;
ulong pfn;
int i, j, max_sect_len;
+reread_block_size:
if (block_size < 0)
return FALSE;
@@ -147,6 +148,14 @@ static int read_dump_header(char *file)
goto err;
}
+ if (header->block_size != block_size) {
+ block_size = header->block_size;
+ free(header);
+ goto reread_block_size;
+ }
+ dd->block_size = block_size;
+ dd->block_shift = ffs(block_size) - 1;
+
if (CRASHDEBUG(1))
fprintf(fp, "%s: header->utsname.machine: %s\n",
DISKDUMP_VALID() ? "diskdump" : "compressed kdump",
@@ -165,15 +174,6 @@ static int read_dump_header(char *file)
machine_type_mismatch(file, "PPC64", NULL, 0))
goto err;
- if (header->block_size != block_size) {
- error(INFO, "%s: block size in the dump header does not match"
- " with system page size\n",
- DISKDUMP_VALID() ? "diskdump" : "compressed kdump");
- goto err;
- }
- dd->block_size = block_size;
- dd->block_shift = ffs(block_size) - 1;
-
if (sizeof(*header) + sizeof(void *) * header->nr_cpus > block_size ||
header->nr_cpus <= 0) {
error(INFO, "%s: invalid nr_cpus value: %d\n",
crash version 4.0-7.6 is available
by Dave Anderson
- Fix for initialization-time failure if the kernel was built without
CONFIG_SWAP. Without the patch, it would fail during initialization
with the error: "crash: cannot resolve: nr_swapfiles"
(anderson(a)redhat.com)
- Fix for the "bt" command when run on x86_64 kernels that contain the
x86/x86_64 merge patch. Without the patch, non-active (blocked)
tasks do not start with "schedule", and as a result may contain
stale frame entries. (anderson(a)redhat.com)
- Fix for the usage of an input file of commands redirected during
runtime via "<", where more than one command in the input file
results in a fatal error. Without the patch, the handling of the
input file would go into an infinite loop repeatedly running the
second failed command. (anderson(a)redhat.com)
- Clean up the causes of warning messages when compiling with gcc 4.3.2.
(anderson(a)redhat.com)
- Fix to prevent a segmentation violation during initialization when
parsing (corrupted) module symbols. Without the patch, if a kernel
module's Elf32_Sym/Elf64_Sym data structure contains a corrupt
"st_index" field, the resultant string table access could cause a
segmentation violation. (anderson(a)redhat.com)
- If an active task experiences a kernel stack overflow, the task's
thread_info structure located at the very bottom of the stack will
likely have its "cpu" field corrupted. Without the patch, any task
with a corrupt cpu value is not accepted, and the error message
"crash: invalid task: <task-address>" is displayed. With the
patch, an active task will be accepted based upon its existence as
the current task in a per-cpu runqueue structure, and there will be
a warning message indicating that the cpu value is corrupt.
(anderson(a)redhat.com)
- Modification of the "files" command when a task has an open file
referenced by a file descriptor, but the file structure's f_dentry
field is NULL. This is a kernel error condition, but without this
patch the "files" command does not display anything for that file
descriptor, as if the file has been closed or is not in use. This
patch displays the file descriptor number and the file structure's
virtual address. (anderson(a)redhat.com)
- Fix for the "bt" command on x86 Xen architectures when the backtrace
starts on the hard IRQ stack. Without the patch, the backtrace
may not properly make the transition back to the process stack
with the error message "bt: invalid stack address for this task",
or it may cause a segmentation violation. (anderson(a)redhat.com)
Download from: http://people.redhat.com/anderson