target compilation?
by Jun Koi
Hi,
I looked at configure.c and found some code like this:
void
get_current_configuration(void)
{
FILE *fp;
static char buf[512];
char *p;
#ifdef __alpha__
target_data.target = ALPHA;
#endif
#ifdef __i386__
target_data.target = X86;
#endif
#ifdef __powerpc__
target_data.target = PPC;
#endif
#ifdef __ia64__
target_data.target = IA64;
#endif
....
}
I have a few questions:
- Is it correct that the above code tries to detect the architecture
(the "target" here) that we are compiling the code on?
- Who defines those architecture macros, such as "__i386__" (in the
check "#ifdef __i386__")? I guessed that the architecture is defined
in a particular header file in /usr/include, but I cannot find
anything there. So I think those macros are defined by the
compilation process of crash, but again I don't see anywhere in the
source doing that.
Thanks,
J
16 years, 2 months
question on some command params
by Jun Koi
Hi,
I found that the command-line parameters below are not documented
anywhere, so could somebody explain their meaning?
- memory_module
- no_modules
- no_ikconfig
- no_namelist_gzip
- no_kmem_cache
- kmem_cache_delay
- readnow
- buildinfo
- zero_excluded
Many thanks,
J
16 years, 2 months
A few more xen hypervisor fixes/updates
by Dave Anderson
Hello Oda-san,
Here are a couple more xen hypervisor specific patches
that I plan to apply to the next release.
The first patch is for the "bt" command when run on the
xen hypervisor. The patch will reject "bt -o", "bt -O",
"bt -e" and "bt -E" with "option not supported" messages.
Additionally, it fixes "bt -R" so that it works correctly,
and does not cause a segmentation violation.
The second patch removes "foreach" from the list of xen
hypervisor commands. It could never have worked, and if you
attempted to use it, it would fail silently.
Unless you have any objections, they are queued for
the next release.
Thanks,
Dave
16 years, 2 months
Re: [Crash-utility] PRE_GDB/POST_GDB initialization?
by anderson@prospeed.net
> Hi,
>
> We are doing some initializations in 2 phases: pre-gdb and post-gdb.
> Why is that necessary? Can we do that in 1 phase only?
No.
>
> I guess the post-init step is needed because it cannot be done before
> gdb is initialized, which is of course done between pre-gdb and
> post-gdb. I looked at the code, but dont see anything special that can
> answer this question. But I am sure that I missed something there. Any
> pointer?
>
Look at main(). The things that are done in the per-arch PRE_GDB
machdep_init() functions are required for things like virtual-to-physical
address translation of unity-mapped kernel virtual addresses. Without
doing them first, read_in_kernel_config() and/or kernel_init() could not
run at all, which is where the first readmem() calls are made. Later on,
after gdb has run, further datatype-specific information is gathered in
the POST_GDB sections.
Dave Anderson
> Many thanks,
> J
16 years, 3 months
PRE_GDB/POST_GDB initialization?
by Jun Koi
Hi,
We are doing some initializations in 2 phases: pre-gdb and post-gdb.
Why is that necessary? Can we do that in 1 phase only?
I guess the post-gdb step is needed because it cannot be done before
gdb is initialized, which of course happens between pre-gdb and
post-gdb. I looked at the code, but I don't see anything special that
answers this question. I am sure I missed something there. Any
pointers?
Many thanks,
J
16 years, 3 months
Xen stack traces using crash ...
by Steve Ofsthun
I'm having trouble examining xen stack traces using crash. I have a full xen dump image and can examine the stack traces for any active vcpu. But when I try to trace a non-active vcpu, I seem to get a copy of the stack trace for the last pcpu the non-active vcpu ran on.
For example:
crash> vcpus
VCID PCID VCPU ST T DOMID DOMAIN
0 0 ffff8300dfdfc080 RU I 32767 ffff8300dc250080
> 1 1 ffff8300dbe02080 RU I 32767 ffff8300dc250080
> 2 2 ffff8300dc24e080 RU I 32767 ffff8300dc250080
> 3 3 ffff8300dfdfa080 RU I 32767 ffff8300dc250080
> 4 4 ffff8300dbe06080 RU I 32767 ffff8300dc250080
> 5 5 ffff8300dbefa080 RU I 32767 ffff8300dc250080
> 6 6 ffff8300dc27e080 RU I 32767 ffff8300dc250080
> 7 7 ffff8300dc27a080 RU I 32767 ffff8300dc250080
> 8 8 ffff8300dc266080 RU I 32767 ffff8300dc250080
> 9 9 ffff8300dc262080 RU I 32767 ffff8300dc250080
> 10 10 ffff8300dfdcc080 RU I 32767 ffff8300dc250080
>* 11 11 ffff8300dfdc8080 RU I 32767 ffff8300dc250080
> 12 12 ffff8300dbe34080 RU I 32767 ffff8300dc250080
> 13 13 ffff8300dbe30080 RU I 32767 ffff8300dc250080
> 14 14 ffff8300dbedc080 RU I 32767 ffff8300dc250080
> 15 15 ffff8300dbed8080 RU I 32767 ffff8300dc250080
> 0 0 ffff8300dfd80080 RU 0 0 ffff8300dfd90080
1 1 ffff8300dbe7e080 BL 0 0 ffff8300dfd90080
2 2 ffff8300dbe7c080 BL 0 0 ffff8300dfd90080
3 3 ffff8300dbe7a080 BL 0 0 ffff8300dfd90080
4 4 ffff8300dbe78080 BL 0 0 ffff8300dfd90080
5 5 ffff8300dbe74080 BL 0 0 ffff8300dfd90080
6 6 ffff8300dbe72080 BL 0 0 ffff8300dfd90080
7 7 ffff8300dbe70080 BL 0 0 ffff8300dfd90080
8 8 ffff8300dbe6e080 BL 0 0 ffff8300dfd90080
9 9 ffff8300dbe6c080 BL 0 0 ffff8300dfd90080
10 10 ffff8300dbe6a080 BL 0 0 ffff8300dfd90080
11 11 ffff8300dbe68080 BL 0 0 ffff8300dfd90080
12 12 ffff8300dbe66080 BL 0 0 ffff8300dfd90080
13 13 ffff8300dbe64080 BL 0 0 ffff8300dfd90080
14 14 ffff8300dbe62080 BL 0 0 ffff8300dfd90080
15 15 ffff8300dbe60080 BL 0 0 ffff8300dfd90080
0 11 ffff8300dbe52080 BL U 7 ffff8300dbfc0080
crash> bt 11
PCPU: 11 VCPU: ffff8300dfdc8080
#0 [ffff8300dbe3fbe0] kexec_crash at ffff828c8010cd8c
#1 [ffff8300dbe3fc00] panic at ffff828c801205fe
#2 [ffff8300dbe3fcc0] show_stack at ffff828c8013c1f3
#3 [ffff8300dbe3fcf0] do_general_protection at ffff828c8013f07e
#4 [ffff8300dbe3fd10] vpic_get_highest_priority_irq at ffff828c8015a2a3
#5 [ffff8300dbe3fd48] svm_asid_inc_generation at ffff828c8015c95e
#6 [ffff8300dbe3fd70] vpic_intercept_pic_io at ffff828c8015ae9e
#7 [ffff8300dbe3fd98] __context_switch at ffff828c8012570e
#8 [ffff8300dbe3fe20] handle_exception_saved at ffff828c801884be
#9 [ffff8300dbe3fea8] restore_all_xen at ffff828c8018809b
#10 [ffff8300dbe3fed8] idle_loop at ffff828c80125c2f
crash>
This is the vcpu that crashed Xen.
Now I want to examine the vcpu for domain 7, since I know that vcpu triggered the fatal crash.
crash> bt ffff8300dbe52080
PCPU: 11 VCPU: ffff8300dbe52080
#0 [ffff8300dbe3fbe0] kexec_crash at ffff828c8010cd8c
#1 [ffff8300dbe3fc00] panic at ffff828c801205fe
#2 [ffff8300dbe3fcc0] show_stack at ffff828c8013c1f3
#3 [ffff8300dbe3fcf0] do_general_protection at ffff828c8013f07e
#4 [ffff8300dbe3fd10] vpic_get_highest_priority_irq at ffff828c8015a2a3
#5 [ffff8300dbe3fd48] svm_asid_inc_generation at ffff828c8015c95e
#6 [ffff8300dbe3fd70] vpic_intercept_pic_io at ffff828c8015ae9e
#7 [ffff8300dbe3fd98] __context_switch at ffff828c8012570e
#8 [ffff8300dbe3fe20] handle_exception_saved at ffff828c801884be
#9 [ffff8300dbe3fea8] restore_all_xen at ffff828c8018809b
#10 [ffff8300dbe3fed8] idle_loop at ffff828c80125c2f
crash>
This trace shows the same stack page as the idle vcpu running on pcpu 11.
Any ideas?
Thanks,
Steve
16 years, 3 months
faulty use of "set" command when running against the xen hypervisor
by Dave Anderson
Cai Quan bumped into another problem when running against the xen hypervisor:
entering the "set" command alone, with no arguments, generates a SIGSEGV.
I also note that the "set -c #" and "set -p" options make no sense either:
# crash --xen_phys_start 3ee00000 xen-syms-2.6.18-92.el5 vmcore
crash 4.0-7.2
Copyright (C) 2002, 2003, 2004, 2005, 2006, 2007, 2008 Red Hat, Inc.
Copyright (C) 2004, 2005, 2006 IBM Corporation
Copyright (C) 1999-2006 Hewlett-Packard Co
Copyright (C) 2005, 2006 Fujitsu Limited
Copyright (C) 2006, 2007 VA Linux Systems Japan K.K.
Copyright (C) 2005 NEC Corporation
Copyright (C) 1999, 2002, 2007 Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions. Enter "help copying" to see the conditions.
This program has absolutely no warranty. Enter "help warranty" for details.
GNU gdb 6.1
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...
KERNEL: xen-syms-2.6.18-92.el5
DEBUGINFO: ./xen-syms-2.6.18-92.el5.debug
DUMPFILE: vmcore
CPUS: 2
DOMAINS: 4
UPTIME: 00:34:18
MACHINE: Intel(R) Pentium(R) 4 CPU 3.40GHz (3400 Mhz)
MEMORY: 1 GB
PCPU-ID: 1
PCPU: ffff83003f05ff28
VCPU-ID: 1
VCPU: ffff83003eef6080 (VCPU_RUNNING)
DOMAIN-ID: 0
DOMAIN: ffff83003eef8080 (DOMAIN_RUNNING)
STATE: CRASH
crash> set -c 0
set: invalid cpu number: system has only 0 cpu
crash> set -p
set: no panic task found!
crash> set
Segmentation fault
#
Admittedly these are nonsensical usages of "set", since there's no concept of
a PID "context" in the hypervisor.
The attached patch changes the behavior to:
crash> set -c 0
set: -c option not supported on this architecture or kernel
crash> set -p
set: -p option not supported on this architecture or kernel
crash> set
set: requires an option with the Xen hypervisor
crash>
Itsura, are you OK with the attached patch?
Thanks,
Dave
16 years, 3 months
Updated SPU extension
by André Detsch
Hi,
Here is an updated version of the spu crash extension. It is meant to
replace the version at:
http://people.redhat.com/anderson/extensions/spu.c
Sending a diff as well, for reference.
The previous version tried to access the spu context field even
when the field is NULL (which is the case for idle physical SPUs). That
bug leads to the following error message:
crash> spus
NODE 0:
ID SPUADDR SPUSTATUS CTXADDR CTXSTATE PID
spus: invalid kernel virtual address: 858 type: "print_spu_header get
ctxstate"
Besides fixing this issue, the patch also cleans up some trailing
whitespace.
PS: Lucio, the original author, is still working at IBM, but on another
project. So, I'm taking care of the extension for him.
Cheers,
--
André Detsch
Kernel Software Engineer - Linux on Cell
Linux Technology Center Brazil
--- spu.c.orig 2007-09-21 15:42:20.000000000 -0300
+++ spu.c 2008-07-22 19:30:58.000000000 -0300
@@ -208,13 +208,13 @@
/*
- * Returns a pointer to the requested SPU field
+ * Returns a pointer to the requested SPU field
*/
ulong get_spu_addr(ulong spu_info)
{
ulong spu_addr;
- readmem(spu_info + CBE_OFFSET(crash_spu_info, spu), KVADDR, &spu_addr,
+ readmem(spu_info + CBE_OFFSET(crash_spu_info, spu), KVADDR, &spu_addr,
sizeof(spu_addr), "get_spu_addr", FAULT_ON_ERROR);
return spu_addr;
@@ -252,8 +252,8 @@
cast(debug_data + offset)); \
} while(0)
-/*
- * Print the spu and spu_context structs fields. Some SPU memory-mapped IO
+/*
+ * Print the spu and spu_context structs fields. Some SPU memory-mapped IO
* registers are taken directly from crash_spu_info.
*/
void print_ctx_info(char *ctx_data, char *spu_data, int info)
@@ -314,6 +314,9 @@
long size, offset;
ulong spu_addr, addr;
+ if (!ctx_addr)
+ return;
+
spu_data = NULL;
info = 0;
@@ -321,7 +324,7 @@
ctx_data = GETBUF(size);
if (!ctx_data)
error(FATAL, "Couldn't allocate memory for ctx.\n");
- readmem(ctx_addr, KVADDR, ctx_data, size, "show_ctx_info ctx",
+ readmem(ctx_addr, KVADDR, ctx_data, size, "show_ctx_info ctx",
FAULT_ON_ERROR);
spu_addr = *(ulong *)(ctx_data + CBE_OFFSET(spu_context, spu));
@@ -359,17 +362,18 @@
int i, j, cnt;
long prio_size, prio_runq_off, ctx_rq_off, jump, offset, ctxs_size;
char *u_spu_prio;
- ulong spu_prio_addr, k_spu_prio, kvaddr, uvaddr, addr, ctx;
+ ulong spu_prio_addr, k_spu_prio, kvaddr, uvaddr, spu_addr, ctx_addr;
ulong *ctxs;
ulong list_head[2];
struct list_data list_data, *ld;
/* Walking SPUs */
for (i = 0; i < NR_SPUS; i++) {
- addr = get_spu_addr(spu[i]) + CBE_OFFSET(spu, ctx);
- readmem(addr, KVADDR, &ctx, sizeof(ctx), "show_ctx_info_all",
- FAULT_ON_ERROR);
- show_ctx_info(ctx);
+ spu_addr = get_spu_addr(spu[i]) + CBE_OFFSET(spu, ctx);
+ readmem(spu_addr, KVADDR, &ctx_addr, sizeof(ctx_addr),
+ "show_ctx_info_all", FAULT_ON_ERROR);
+ if (ctx_addr)
+ show_ctx_info(ctx_addr);
}
/* Walking SPU runqueue */
@@ -387,7 +391,7 @@
prio_size = CBE_SIZE(spu_prio_array);
u_spu_prio = (char *)GETBUF(prio_size);
- readmem(k_spu_prio, KVADDR, u_spu_prio, prio_size, "get_runq_ctxs",
+ readmem(k_spu_prio, KVADDR, u_spu_prio, prio_size, "get_runq_ctxs",
FAULT_ON_ERROR);
for (i = 0; i < MAX_PRIO; i++) {
@@ -470,7 +474,7 @@
/* Testing for SPU ID */
if ((dvalue >= 0) && (dvalue < NR_SPUS)) {
addr = get_spu_addr(spu[dvalue]) + CBE_OFFSET(spu, ctx);
- readmem(addr, KVADDR, &ctx, sizeof(ctx),
+ readmem(addr, KVADDR, &ctx, sizeof(ctx),
"str_to_spuctx ID", FAULT_ON_ERROR);
type = STR_SPU_ID;
@@ -481,9 +485,9 @@
else {
/* Testing for PID */
for (i = 0; i < NR_SPUS; i++) {
- addr = get_spu_addr(spu[i]) +
+ addr = get_spu_addr(spu[i]) +
CBE_OFFSET(spu, pid);
- readmem(addr, KVADDR, &pid, sizeof(pid),
+ readmem(addr, KVADDR, &pid, sizeof(pid),
"str_to_spuctx PID", FAULT_ON_ERROR);
if (dvalue == pid) {
@@ -506,7 +510,7 @@
/* Testing for spuctx address on SPUs */
for (i = 0; i < NR_SPUS; i++) {
addr = get_spu_addr(spu[i]) + CBE_OFFSET(spu, ctx);
- readmem(addr, KVADDR, &ctx, sizeof(ctx),
+ readmem(addr, KVADDR, &ctx, sizeof(ctx),
"str_to_spuctx CTX", FAULT_ON_ERROR);
if (hvalue == ctx) {
@@ -520,7 +524,7 @@
/* Testing for spuctx address on SPU runqueue */
if (symbol_exists("spu_prio")) {
spu_prio_addr = symbol_value("spu_prio");
- readmem(spu_prio_addr, KVADDR, &k_spu_prio,
+ readmem(spu_prio_addr, KVADDR, &k_spu_prio,
sizeof(k_spu_prio), "runq_array", FAULT_ON_ERROR);
}
else
@@ -532,7 +536,7 @@
prio_size = CBE_SIZE(spu_prio_array);
u_spu_prio = (char *)GETBUF(prio_size);
- readmem(k_spu_prio, KVADDR, u_spu_prio, prio_size,
+ readmem(k_spu_prio, KVADDR, u_spu_prio, prio_size,
"get_runq_ctxs", FAULT_ON_ERROR);
for (i = 0; i < MAX_PRIO; i++) {
@@ -563,7 +567,7 @@
ctxs_size = cnt * sizeof(ulong);
ctxs = (ulong *)GETBUF(ctxs_size);
-
+
BZERO(ctxs, ctxs_size);
cnt = retrieve_list(ctxs, cnt);
hq_close();
@@ -587,8 +591,8 @@
return type;
}
-/*
- * spuctx command stands for "spu context" and shows the context fields
+/*
+ * spuctx command stands for "spu context" and shows the context fields
* for the spu or respective struct address passed as an argument
*/
void cmd_spuctx()
@@ -598,7 +602,7 @@
ulong *ctxlist;
while ((c = getopt(argcnt, args, "")) != EOF) {
- switch(c)
+ switch(c)
{
default:
argerrs++;
@@ -619,7 +623,7 @@
while (args[optind]) {
if (IS_A_NUMBER(args[optind])) {
- switch (str_to_spuctx(args[optind], &value, &ctx))
+ switch (str_to_spuctx(args[optind], &value, &ctx))
{
case STR_SPU_ID:
case STR_SPU_PID:
@@ -634,7 +638,7 @@
}
}
else
- error(INFO, "Invalid SPU reference: %s\n",
+ error(INFO, "Invalid SPU reference: %s\n",
args[optind]);
optind++;
}
@@ -662,39 +666,44 @@
const char *state_str;
if (spu_info) {
- readmem(spu_info + CBE_OFFSET(crash_spu_info,
+ readmem(spu_info + CBE_OFFSET(crash_spu_info,
saved_spu_status_R), KVADDR, &status, sizeof(status),
"print_spu_header: get status", FAULT_ON_ERROR);
size = CBE_SIZE(spu);
spu_data = GETBUF(size);
spu_addr = get_spu_addr(spu_info);
- readmem(spu_addr, KVADDR, spu_data, size, "SPU struct",
+ readmem(spu_addr, KVADDR, spu_data, size, "SPU struct",
FAULT_ON_ERROR);
id = *(int *)(spu_data + CBE_OFFSET(spu, number));
- ctx_addr = *(ulong *)(spu_data + CBE_OFFSET(spu, ctx));
pid = *(int *)(spu_data + CBE_OFFSET(spu, pid));
+ ctx_addr = *(ulong *)(spu_data + CBE_OFFSET(spu, ctx));
- readmem(ctx_addr + CBE_OFFSET(spu_context, state), KVADDR,
- &state, sizeof(state), "print_spu_header get ctxstate",
- FAULT_ON_ERROR);
-
- switch (state) {
- case 0: /* SPU_STATE_RUNNABLE */
- state_str = "RUNNABLE";
- break;
-
- case 1: /* SPU_STATE_SAVED */
- state_str = " SAVED ";
- break;
+ if (ctx_addr) {
+ readmem(ctx_addr + CBE_OFFSET(spu_context, state),
+ KVADDR, &state, sizeof(state),
+ "print_spu_header get ctxstate", FAULT_ON_ERROR);
+
+ switch (state) {
+ case 0: /* SPU_STATE_RUNNABLE */
+ state_str = "RUNNABLE";
+ break;
+
+ case 1: /* SPU_STATE_SAVED */
+ state_str = " SAVED ";
+ break;
- default:
- state_str = "UNKNOWN ";
+ default:
+ state_str = "UNKNOWN ";
+ }
+ }
+ else {
+ state_str = " - ";
}
- fprintf(fp, "%2d %16lx %s %16lx %s %5d\n", id,
- spu_addr,
+ fprintf(fp, "%2d %16lx %s %16lx %s %5d\n", id,
+ spu_addr,
status % 2 ? "RUNNING" : (ctx_addr ? "STOPPED" : " IDLE "),
ctx_addr, state_str, pid);
@@ -747,7 +756,7 @@
}
/*
- * spus stands for "spu state" and shows what contexts are running in what
+ * spus stands for "spu state" and shows what contexts are running in what
* SPU.
*/
void cmd_spus()
@@ -792,7 +801,7 @@
prio_size = CBE_SIZE(spu_prio_array);
u_spu_prio = (char *)GETBUF(prio_size);
- readmem(k_spu_prio, KVADDR, u_spu_prio, prio_size, "get_runq_ctxs",
+ readmem(k_spu_prio, KVADDR, u_spu_prio, prio_size, "get_runq_ctxs",
FAULT_ON_ERROR);
for (i = 0; i < MAX_PRIO; i++) {
@@ -829,7 +838,7 @@
}
/*
- * spurq stands for "spu run queue" and shows info about the contexts
+ * spurq stands for "spu run queue" and shows info about the contexts
* that are on the SPU run queue
*/
void cmd_spurq()
@@ -868,7 +877,7 @@
SPUCTX_CMD_NAME,
"shows complete info about a SPU context",
"[ID | PID | CTXADDR] ...",
-
+
" This command shows the fields of spu and spu_context structs for a ",
"SPU context, including debug info specially saved by kdump after a ",
"crash.",
@@ -948,7 +957,7 @@
" saved_spu_status_R = 1",
" saved_spu_npc_RW = 0",
- "\n Show info about the context whose struct spu_context address is ",
+ "\n Show info about the context whose struct spu_context address is ",
"0xc00000003dcbed80:\n",
"crash> spuctx 0x00000003dcbed80",
" ...",
@@ -995,7 +1004,7 @@
SPURQ_CMD_NAME,
"shows contexts on the SPU runqueue",
" ",
- " This command shows info about all contexts waiting for execution ",
+ " This command shows info about all contexts waiting for execution ",
"in the SPU runqueue. No parameter is needed.",
"\nEXAMPLE",
" Show SPU runqueue:",
/* spu.c - commands for viewing Cell/B.E. SPUs data
*
* (C) Copyright 2007 IBM Corp.
*
* Author: Lucio Correia <luciojhc(a)br.ibm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include "defs.h"
#define NR_SPUS (16) /* Enough for current hardware */
#define MAX_PRIO (140)
#define MAX_PROPERTY_NAME (64)
#define STR_SPU_INVALID (0x0)
#define STR_SPU_ID (0x1)
#define STR_SPU_PID (0x2)
#define STR_SPU_CTX_ADDR (0x8)
#define SPUCTX_CMD_NAME "spuctx"
#define SPUS_CMD_NAME "spus"
#define SPURQ_CMD_NAME "spurq"
struct cbe_size_table;
struct cbe_offset_table;
void init_cbe_size_table(void);
void init_cbe_offset_table(void);
ulong get_spu_addr(ulong spu_info);
void cmd_spus(void);
void cmd_spurq(void);
void cmd_spuctx(void);
char *help_spus[];
char *help_spurq[];
void show_spu_state(ulong spu);
void dump_spu_runq(ulong k_prio_array);
char *help_spuctx[];
void show_ctx_info(ulong ctx_addr);
void print_ctx_info(char *ctx_data, char *spu_data, int info);
void show_ctx_info_all(void);
static struct command_table_entry command_table[] = {
SPUCTX_CMD_NAME, cmd_spuctx, help_spuctx, 0,
SPUS_CMD_NAME, cmd_spus, help_spus, 0,
SPURQ_CMD_NAME, cmd_spurq, help_spurq, 0,
NULL
};
struct cbe_size_table {
long crash_spu_info;
long spu;
long spu_context;
long spu_prio_array;
long list_head;
} cbe_size_table;
struct cbe_offset_table {
long crash_spu_info_spu;
long crash_spu_info_saved_mfc_sr1_RW;
long crash_spu_info_saved_mfc_dar;
long crash_spu_info_saved_mfc_dsisr;
long crash_spu_info_saved_spu_runcntl_RW;
long crash_spu_info_saved_spu_status_R;
long crash_spu_info_saved_spu_npc_RW;
long spu_node;
long spu_number;
long spu_ctx;
long spu_pid;
long spu_name;
long spu_slb_replace;
long spu_mm;
long spu_timestamp;
long spu_class_0_pending;
long spu_problem;
long spu_priv2;
long spu_flags;
long spu_context_spu;
long spu_context_state;
long spu_context_prio;
long spu_context_local_store;
long spu_context_rq;
long spu_prio_array_runq;
} cbe_offset_table;
#define CBE_SIZE(X) (cbe_size_table.X)
#define CBE_OFFSET(X, Y) (cbe_offset_table.X##_##Y)
#define CBE_SIZE_INIT(X, Y) \
do { \
cbe_size_table.X = STRUCT_SIZE(Y); \
if (cbe_size_table.X == -1) \
error(FATAL, "Couldn't get %s size.\n", Y); \
} while(0)
#define CBE_OFFSET_INIT(X, Y, Z) \
do { \
cbe_offset_table.X = MEMBER_OFFSET(Y, Z); \
if (cbe_offset_table.X == -1) \
error(FATAL, "Couldn't get %s.%s offset.\n", Y, Z); \
} while(0)
ulong spu[NR_SPUS];
/*****************************************************************************
* INIT FUNCTIONS
*/
/*
* Read kernel virtual addresses of crash_spu_info data stored by kdump
*/
void init_cbe_size_table(void)
{
CBE_SIZE_INIT(crash_spu_info, "crash_spu_info");
CBE_SIZE_INIT(spu, "spu");
CBE_SIZE_INIT(spu_context, "spu_context");
CBE_SIZE_INIT(spu_prio_array, "spu_prio_array");
CBE_SIZE_INIT(list_head, "list_head");
}
void init_cbe_offset_table(void)
{
CBE_OFFSET_INIT(crash_spu_info_spu, "crash_spu_info", "spu");
CBE_OFFSET_INIT(crash_spu_info_saved_mfc_sr1_RW, "crash_spu_info",
"saved_mfc_sr1_RW");
CBE_OFFSET_INIT(crash_spu_info_saved_mfc_dar, "crash_spu_info",
"saved_mfc_dar");
CBE_OFFSET_INIT(crash_spu_info_saved_mfc_dsisr, "crash_spu_info",
"saved_mfc_dsisr");
CBE_OFFSET_INIT(crash_spu_info_saved_spu_runcntl_RW, "crash_spu_info",
"saved_spu_runcntl_RW");
CBE_OFFSET_INIT(crash_spu_info_saved_spu_status_R, "crash_spu_info",
"saved_spu_status_R");
CBE_OFFSET_INIT(crash_spu_info_saved_spu_npc_RW, "crash_spu_info",
"saved_spu_npc_RW");
CBE_OFFSET_INIT(spu_node, "spu", "node");
CBE_OFFSET_INIT(spu_number, "spu", "number");
CBE_OFFSET_INIT(spu_ctx, "spu", "ctx");
CBE_OFFSET_INIT(spu_pid, "spu", "pid");
CBE_OFFSET_INIT(spu_name, "spu", "name");
CBE_OFFSET_INIT(spu_slb_replace, "spu", "slb_replace");
CBE_OFFSET_INIT(spu_mm, "spu", "mm");
CBE_OFFSET_INIT(spu_timestamp, "spu", "timestamp");
CBE_OFFSET_INIT(spu_class_0_pending, "spu", "class_0_pending");
CBE_OFFSET_INIT(spu_problem, "spu", "problem");
CBE_OFFSET_INIT(spu_priv2, "spu", "priv2");
CBE_OFFSET_INIT(spu_flags, "spu", "flags");
CBE_OFFSET_INIT(spu_context_spu, "spu_context", "spu");
CBE_OFFSET_INIT(spu_context_state, "spu_context", "state");
CBE_OFFSET_INIT(spu_context_prio, "spu_context", "prio");
CBE_OFFSET_INIT(spu_context_local_store, "spu_context", "local_store");
CBE_OFFSET_INIT(spu_context_rq, "spu_context", "rq");
CBE_OFFSET_INIT(spu_prio_array_runq, "spu_prio_array", "runq");
}
void get_crash_spu_info(void)
{
int i;
ulong addr;
long struct_size;
addr = symbol_value("crash_spu_info");
struct_size = CBE_SIZE(crash_spu_info);
for (i = 0; i < NR_SPUS; i++)
spu[i] = addr + (i * struct_size);
}
_init()
{
int i, n_registered;
struct command_table_entry *cte;
init_cbe_size_table();
init_cbe_offset_table();
for (i = 0; i < NR_SPUS; i++)
spu[i] = 0;
register_extension(command_table);
get_crash_spu_info();
}
_fini() { }
/*****************************************************************************
* BASIC FUNCTIONS
*/
/*
* Returns a pointer to the requested SPU field
*/
ulong get_spu_addr(ulong spu_info)
{
ulong spu_addr;
readmem(spu_info + CBE_OFFSET(crash_spu_info, spu), KVADDR, &spu_addr,
sizeof(spu_addr), "get_spu_addr", FAULT_ON_ERROR);
return spu_addr;
}
/*****************************************************************************
* SPUCTX COMMAND
*/
#define DUMP_WIDTH 23
#define DUMP_SPU_NAME \
do { \
fprintf(fp, " %-*s = %s\n", DUMP_WIDTH, "name", name_str); \
} while(0)
#define DUMP_SPU_FIELD(format, field, cast) \
do { \
offset = CBE_OFFSET(spu, field); \
fprintf(fp, " %-*s = "format"\n", DUMP_WIDTH, #field, \
cast(spu_data + offset)); \
} while(0)
#define DUMP_CTX_FIELD(format, field, cast) \
do { \
offset = CBE_OFFSET(spu_context, field); \
fprintf(fp, " %-*s = "format"\n", DUMP_WIDTH, #field, \
cast(ctx_data + offset)); \
} while(0)
#define DUMP_DBG_FIELD(format, field, cast) \
do { \
offset = CBE_OFFSET(crash_spu_info, field); \
fprintf(fp, " %-*s = "format"\n", DUMP_WIDTH, #field, \
cast(debug_data + offset)); \
} while(0)
/*
* Print the spu and spu_context structs fields. Some SPU memory-mapped IO
* registers are taken directly from crash_spu_info.
*/
void print_ctx_info(char *ctx_data, char *spu_data, int info)
{
long offset, size;
char *name_str, *debug_data;
DUMP_CTX_FIELD("%d", state, *(int *));
DUMP_CTX_FIELD("%d", prio, *(int *));
DUMP_CTX_FIELD("%p", local_store, *(ulong *));
DUMP_CTX_FIELD("%p", rq, *(ulong *));
if (spu_data) {
offset = CBE_OFFSET(spu, name);
size = MAX_PROPERTY_NAME * sizeof(char);
name_str = (char *)GETBUF(size);
readmem(*(ulong *)(spu_data + offset), KVADDR, name_str, size,
"name_str", FAULT_ON_ERROR);
DUMP_SPU_NAME;
FREEBUF(name_str);
DUMP_SPU_FIELD("%d", node, *(int *));
DUMP_SPU_FIELD("%d", number, *(int *));
DUMP_SPU_FIELD("%d", pid, *(int *));
DUMP_SPU_FIELD("0x%x", slb_replace, *(unsigned int *));
DUMP_SPU_FIELD("%p", mm, *(ulong *));
DUMP_SPU_FIELD("%p", timestamp, *(long long *));
DUMP_SPU_FIELD("%d", class_0_pending, *(int *));
DUMP_SPU_FIELD("%p", problem, *(ulong *));
DUMP_SPU_FIELD("%p", priv2, *(ulong *));
DUMP_SPU_FIELD("0x%lx", flags, *(ulong *));
size = CBE_SIZE(crash_spu_info);
debug_data = (char *)GETBUF(size);
readmem(spu[info], KVADDR, debug_data, size, "debug_data",
FAULT_ON_ERROR);
DUMP_DBG_FIELD("0x%lx", saved_mfc_sr1_RW, *(ulong *));
DUMP_DBG_FIELD("0x%lx", saved_mfc_dar, *(ulong *));
DUMP_DBG_FIELD("0x%lx", saved_mfc_dsisr, *(ulong *));
DUMP_DBG_FIELD("0x%x", saved_spu_runcntl_RW, *(uint *));
DUMP_DBG_FIELD("0x%x", saved_spu_status_R, *(uint *));
DUMP_DBG_FIELD("0x%x", saved_spu_npc_RW, *(uint *));
FREEBUF(debug_data);
}
}
/*
* Pass ctx and respective spu data to print_ctx_info for the contexts in
* ctx_addr list (chosen contexts).
*/
void show_ctx_info(ulong ctx_addr)
{
int number, info, i;
char *ctx_data, *spu_data;
long size, offset;
ulong spu_addr, addr;
if (!ctx_addr)
return;
spu_data = NULL;
info = 0;
size = CBE_SIZE(spu_context);
ctx_data = GETBUF(size);
if (!ctx_data)
error(FATAL, "Couldn't allocate memory for ctx.\n");
readmem(ctx_addr, KVADDR, ctx_data, size, "show_ctx_info ctx",
FAULT_ON_ERROR);
spu_addr = *(ulong *)(ctx_data + CBE_OFFSET(spu_context, spu));
if (spu_addr) {
size = CBE_SIZE(spu);
spu_data = GETBUF(size);
if (!spu_data)
error(FATAL, "Couldn't allocate memory for spu.\n");
readmem(spu_addr, KVADDR, spu_data, size, "show_ctx_info spu",
FAULT_ON_ERROR);
for (i = 0; i < NR_SPUS; i++) {
readmem(spu[i], KVADDR, &addr, sizeof(addr), "spu addr",
FAULT_ON_ERROR);
if (addr == spu_addr)
info = i;
}
}
fprintf(fp,"\nDumping context fields for spu_context %lx:\n", ctx_addr);
print_ctx_info(ctx_data, spu_data, info);
FREEBUF(ctx_data);
if (spu_addr)
FREEBUF(spu_data);
}
/*
* Pass ctx and respective spu data to show_ctx_info for all the contexts
* running and on the runqueue.
*/
void show_ctx_info_all(void)
{
int i, j, cnt;
long prio_size, prio_runq_off, ctx_rq_off, jump, offset, ctxs_size;
char *u_spu_prio;
ulong spu_prio_addr, k_spu_prio, kvaddr, uvaddr, spu_addr, ctx_addr;
ulong *ctxs;
ulong list_head[2];
struct list_data list_data, *ld;
/* Walking SPUs */
for (i = 0; i < NR_SPUS; i++) {
spu_addr = get_spu_addr(spu[i]) + CBE_OFFSET(spu, ctx);
readmem(spu_addr, KVADDR, &ctx_addr, sizeof(ctx_addr),
"show_ctx_info_all", FAULT_ON_ERROR);
if (ctx_addr)
show_ctx_info(ctx_addr);
}
/* Walking SPU runqueue */
if (symbol_exists("spu_prio")) {
spu_prio_addr = symbol_value("spu_prio");
readmem(spu_prio_addr, KVADDR, &k_spu_prio, sizeof(k_spu_prio),
"runq_array", FAULT_ON_ERROR);
}
else
error(FATAL, "Could not get SPU run queue data.\n");
jump = CBE_SIZE(list_head);
prio_runq_off = CBE_OFFSET(spu_prio_array, runq);
ctx_rq_off = CBE_OFFSET(spu_context, rq);
prio_size = CBE_SIZE(spu_prio_array);
u_spu_prio = (char *)GETBUF(prio_size);
readmem(k_spu_prio, KVADDR, u_spu_prio, prio_size, "get_runq_ctxs",
FAULT_ON_ERROR);
for (i = 0; i < MAX_PRIO; i++) {
offset = prio_runq_off + i * jump;
kvaddr = k_spu_prio + offset;
uvaddr = (ulong)u_spu_prio + offset;
BCOPY((char *)uvaddr, (char *)&list_head[0], sizeof(ulong)*2);
if ((list_head[0] == kvaddr) && (list_head[1] == kvaddr))
continue;
ld = &list_data;
BZERO(ld, sizeof(struct list_data));
ld->start = list_head[0];
ld->list_head_offset = ctx_rq_off;
ld->flags |= RETURN_ON_LIST_ERROR;
ld->end = kvaddr;
hq_open();
cnt = do_list(ld);
if (cnt == -1) {
hq_close();
FREEBUF(u_spu_prio);
error(FATAL, "Couldn't walk the list.\n");
}
ctxs_size = cnt * sizeof(ulong);
ctxs = (ulong *)GETBUF(ctxs_size);
BZERO(ctxs, ctxs_size);
cnt = retrieve_list(ctxs, cnt);
hq_close();
for (j = 0; j < cnt; j++)
show_ctx_info(ctxs[j]);
FREEBUF(ctxs);
}
FREEBUF(u_spu_prio);
}
/*
* Tries to discover the meaning of string and to find the referred context
*/
int str_to_spuctx(char *string, ulong *value, ulong *spu_ctx)
{
char *s, *u_spu_prio;
ulong dvalue, hvalue, addr, ctx;
ulong k_spu_prio, spu_prio_addr, kvaddr, uvaddr;
int type, pid, i, j, cnt;
long prio_size, prio_runq_off, ctx_rq_off, jump, offset, ctxs_size;
ulong *ctxs;
ulong list_head[2];
struct list_data list_data, *ld;
if (string == NULL) {
error(INFO, "%s: received NULL string.\n", __FUNCTION__);
return STR_SPU_INVALID;
}
s = string;
dvalue = hvalue = BADADDR;
if (decimal(s, 0))
dvalue = dtol(s, RETURN_ON_ERROR, NULL);
if (hexadecimal(s, 0)) {
if (STRNEQ(s, "0x") || STRNEQ(s, "0X"))
s += 2;
if (strlen(s) <= MAX_HEXADDR_STRLEN)
hvalue = htol(s, RETURN_ON_ERROR, NULL);
}
type = STR_SPU_INVALID;
if (dvalue != BADADDR) {
/* Testing for SPU ID */
if ((dvalue >= 0) && (dvalue < NR_SPUS)) {
addr = get_spu_addr(spu[dvalue]) + CBE_OFFSET(spu, ctx);
readmem(addr, KVADDR, &ctx, sizeof(ctx),
"str_to_spuctx ID", FAULT_ON_ERROR);
type = STR_SPU_ID;
*value = dvalue;
*spu_ctx = ctx;
return type;
}
else {
/* Testing for PID */
for (i = 0; i < NR_SPUS; i++) {
addr = get_spu_addr(spu[i]) +
CBE_OFFSET(spu, pid);
readmem(addr, KVADDR, &pid, sizeof(pid),
"str_to_spuctx PID", FAULT_ON_ERROR);
if (dvalue == pid) {
addr = get_spu_addr(spu[i]) +
CBE_OFFSET(spu, ctx);
readmem(addr, KVADDR, &ctx, sizeof(ctx),
"str_to_spuctx PID ctx",
FAULT_ON_ERROR);
type = STR_SPU_PID;
*value = dvalue;
*spu_ctx = ctx;
return type;
}
}
}
}
if (hvalue != BADADDR) {
/* Testing for spuctx address on SPUs */
for (i = 0; i < NR_SPUS; i++) {
addr = get_spu_addr(spu[i]) + CBE_OFFSET(spu, ctx);
readmem(addr, KVADDR, &ctx, sizeof(ctx),
"str_to_spuctx CTX", FAULT_ON_ERROR);
if (hvalue == ctx) {
type = STR_SPU_CTX_ADDR;
*value = hvalue;
*spu_ctx = ctx;
return type;
}
}
/* Testing for spuctx address on SPU runqueue */
if (symbol_exists("spu_prio")) {
spu_prio_addr = symbol_value("spu_prio");
readmem(spu_prio_addr, KVADDR, &k_spu_prio,
sizeof(k_spu_prio), "runq_array", FAULT_ON_ERROR);
}
else
error(FATAL, "Could not get SPU run queue data.\n");
jump = CBE_SIZE(list_head);
prio_runq_off = CBE_OFFSET(spu_prio_array, runq);
ctx_rq_off = CBE_OFFSET(spu_context, rq);
prio_size = CBE_SIZE(spu_prio_array);
u_spu_prio = (char *)GETBUF(prio_size);
readmem(k_spu_prio, KVADDR, u_spu_prio, prio_size,
"get_runq_ctxs", FAULT_ON_ERROR);
for (i = 0; i < MAX_PRIO; i++) {
offset = prio_runq_off + i * jump;
kvaddr = k_spu_prio + offset;
uvaddr = (ulong)u_spu_prio + offset;
BCOPY((char *)uvaddr, (char *)&list_head[0], sizeof(ulong)*2);
if ((list_head[0] == kvaddr) && (list_head[1] == kvaddr))
continue;
ld = &list_data;
BZERO(ld, sizeof(struct list_data));
ld->start = list_head[0];
ld->list_head_offset = ctx_rq_off;
ld->flags |= RETURN_ON_LIST_ERROR;
ld->end = kvaddr;
hq_open();
cnt = do_list(ld);
if (cnt == -1) {
hq_close();
FREEBUF(u_spu_prio);
error(FATAL, "Couldn't walk the list.\n");
}
ctxs_size = cnt * sizeof(ulong);
ctxs = (ulong *)GETBUF(ctxs_size);
BZERO(ctxs, ctxs_size);
cnt = retrieve_list(ctxs, cnt);
hq_close();
for (j = 0; j < cnt; j++)
if (hvalue == ctxs[j]) {
type = STR_SPU_CTX_ADDR;
*value = hvalue;
*spu_ctx = ctxs[j];
FREEBUF(u_spu_prio);
FREEBUF(ctxs);
return type;
}
FREEBUF(ctxs);
}
FREEBUF(u_spu_prio);
}
return type;
}
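For reference, the decision order implemented above can be reduced to a small standalone sketch: a decimal argument is tried first as an SPU ID, then as a controller PID; a hexadecimal argument is matched against the context addresses bound to each SPU (the runqueue search is omitted here). The mock table and names below are illustrative, not the crash API:

```c
#include <assert.h>

#define NR_SPUS 3

/* return codes mirroring the STR_SPU_* values used above */
enum { SPU_INVALID, SPU_ID, SPU_PID, SPU_CTX_ADDR };

/* mock stand-in for the kernel data reached via readmem() */
struct mock_spu { int pid; unsigned long ctx; };

static struct mock_spu spus[NR_SPUS] = {
	{ 1524, 0xc0000d80UL },
	{ 1538, 0xc0004e80UL },
	{    0, 0x0UL },
};

/*
 * Resolve a user-supplied reference: a decimal value is tried first
 * as an SPU ID, then as a controller PID; a hex value is matched
 * against bound spu_context addresses.
 */
static int resolve_spu_ref(unsigned long dvalue, unsigned long hvalue,
			   unsigned long *ctx)
{
	int i;

	if (dvalue < NR_SPUS) {			/* SPU ID wins first */
		*ctx = spus[dvalue].ctx;
		return SPU_ID;
	}
	for (i = 0; i < NR_SPUS; i++)		/* then try as a PID */
		if (spus[i].pid && dvalue == (unsigned long)spus[i].pid) {
			*ctx = spus[i].ctx;
			return SPU_PID;
		}
	for (i = 0; i < NR_SPUS; i++)		/* hex: bound ctx address */
		if (hvalue && hvalue == spus[i].ctx) {
			*ctx = hvalue;
			return SPU_CTX_ADDR;
		}
	return SPU_INVALID;
}
```

Note how, as in str_to_spuctx, a small decimal value is never interpreted as a PID: SPU IDs shadow PIDs below NR_SPUS.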
/*
 * The spuctx command stands for "spu context" and shows the context
 * fields for each SPU ID, PID or spu_context address passed as an
 * argument.
 */
void cmd_spuctx()
{
int i, c, cnt;
ulong value, ctx;
ulong *ctxlist;
while ((c = getopt(argcnt, args, "")) != EOF) {
switch(c)
{
default:
argerrs++;
break;
}
}
if (argerrs)
cmd_usage(pc->curcmd, SYNOPSIS);
if (!args[optind]) {
show_ctx_info_all();
return;
}
cnt = 0;
ctxlist = (ulong *)GETBUF((MAXARGS+NR_CPUS)*sizeof(ctx));
while (args[optind]) {
if (IS_A_NUMBER(args[optind])) {
switch (str_to_spuctx(args[optind], &value, &ctx))
{
case STR_SPU_ID:
case STR_SPU_PID:
case STR_SPU_CTX_ADDR:
ctxlist[cnt++] = ctx;
break;
case STR_SPU_INVALID:
error(INFO, "Invalid SPU reference: %s\n",
args[optind]);
break;
}
}
else
error(INFO, "Invalid SPU reference: %s\n",
args[optind]);
optind++;
}
if (cnt == 0)
error(INFO, "No valid ID, PID or context address.\n");
else
for (i = 0; i < cnt; i++)
show_ctx_info(ctxlist[i]);
FREEBUF(ctxlist);
}
/*****************************************************************************
* SPUS COMMAND
*/
void print_spu_header(ulong spu_info)
{
int id, pid, size, state;
uint status;
ulong ctx_addr, spu_addr;
char *spu_data;
const char *state_str;
if (spu_info) {
readmem(spu_info + CBE_OFFSET(crash_spu_info,
saved_spu_status_R), KVADDR, &status, sizeof(status),
"print_spu_header: get status", FAULT_ON_ERROR);
size = CBE_SIZE(spu);
spu_data = GETBUF(size);
spu_addr = get_spu_addr(spu_info);
readmem(spu_addr, KVADDR, spu_data, size, "SPU struct",
FAULT_ON_ERROR);
id = *(int *)(spu_data + CBE_OFFSET(spu, number));
pid = *(int *)(spu_data + CBE_OFFSET(spu, pid));
ctx_addr = *(ulong *)(spu_data + CBE_OFFSET(spu, ctx));
if (ctx_addr) {
readmem(ctx_addr + CBE_OFFSET(spu_context, state),
KVADDR, &state, sizeof(state),
"print_spu_header get ctxstate", FAULT_ON_ERROR);
switch (state) {
case 0: /* SPU_STATE_RUNNABLE */
state_str = "RUNNABLE";
break;
case 1: /* SPU_STATE_SAVED */
state_str = " SAVED ";
break;
default:
state_str = "UNKNOWN ";
}
}
else {
state_str = " - ";
}
fprintf(fp, "%2d %16lx %s %16lx %s %5d\n", id,
spu_addr,
status % 2 ? "RUNNING" : (ctx_addr ? "STOPPED" : " IDLE "),
ctx_addr, state_str, pid);
FREEBUF(spu_data);
}
}
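The SPUSTATUS and CTXSTATE columns printed above are derived from two small decodings, isolated in this sketch (function names are illustrative): bit 0 of the saved status register marks a running SPU, and a context's state field is 0 for runnable, 1 for saved:

```c
#include <assert.h>
#include <string.h>

/* SPUSTATUS: low bit of the saved status register set => RUNNING;
 * otherwise STOPPED if a context is bound, IDLE if not. */
static const char *spu_status_str(unsigned int status,
				  unsigned long ctx_addr)
{
	if (status % 2)
		return "RUNNING";
	return ctx_addr ? "STOPPED" : " IDLE  ";
}

/* CTXSTATE: 0 = SPU_STATE_RUNNABLE, 1 = SPU_STATE_SAVED */
static const char *ctx_state_str(unsigned long ctx_addr, int state)
{
	if (!ctx_addr)
		return "   -    ";
	switch (state) {
	case 0:  return "RUNNABLE";
	case 1:  return " SAVED  ";
	default: return "UNKNOWN ";
	}
}
```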
void print_node_header(int node)
{
fprintf(fp, "\n");
fprintf(fp, "NODE %i:\n", node);
fprintf(fp, "ID SPUADDR SPUSTATUS CTXADDR \
CTXSTATE PID \n");
}
void show_spus()
{
int i, j, nr_cpus, show_header, node;
ulong spu_addr, addr;
long offset;
nr_cpus = kt->kernel_NR_CPUS ? kt->kernel_NR_CPUS : NR_CPUS;
for (i = 0; i < nr_cpus; i++) {
show_header = TRUE;
for (j = 0; j < NR_SPUS; j++) {
addr = spu[j] + CBE_OFFSET(crash_spu_info, spu);
readmem(addr, KVADDR, &spu_addr, sizeof(spu_addr),
"show_spus spu_addr", FAULT_ON_ERROR);
offset = CBE_OFFSET(spu, node);
if (offset == -1)
error(FATAL, "Couldn't get spu.node offset.\n");
spu_addr += offset;
readmem(spu_addr, KVADDR, &node, sizeof(node),
"show_spus node", FAULT_ON_ERROR);
if (node == i) {
if (show_header) {
print_node_header(node);
show_header = FALSE;
}
print_spu_header(spu[j]);
}
}
}
}
/*
 * The spus command stands for "spu state" and shows which contexts
 * are running on which SPUs.
 */
void cmd_spus()
{
int c;
while ((c = getopt(argcnt, args, "")) != EOF) {
switch(c)
{
default:
argerrs++;
break;
}
}
if (argerrs || args[optind])
cmd_usage(pc->curcmd, SYNOPSIS);
else
show_spus();
}
/*****************************************************************************
* SPURQ COMMAND
*/
/*
* Prints the addresses of SPU contexts on the SPU runqueue.
*/
void dump_spu_runq(ulong k_spu_prio)
{
int i, cnt;
long prio_size, prio_runq_off, ctx_rq_off, jump, offset;
char *u_spu_prio;
ulong kvaddr, uvaddr;
ulong list_head[2];
struct list_data list_data, *ld;
prio_runq_off = CBE_OFFSET(spu_prio_array, runq);
jump = CBE_SIZE(list_head);
ctx_rq_off = CBE_OFFSET(spu_context, rq);
prio_size = CBE_SIZE(spu_prio_array);
u_spu_prio = (char *)GETBUF(prio_size);
readmem(k_spu_prio, KVADDR, u_spu_prio, prio_size, "get_runq_ctxs",
FAULT_ON_ERROR);
for (i = 0; i < MAX_PRIO; i++) {
offset = prio_runq_off + (i * jump);
kvaddr = k_spu_prio + offset;
uvaddr = (ulong)u_spu_prio + offset;
BCOPY((char *)uvaddr, (char *)&list_head[0], sizeof(ulong)*2);
if ((list_head[0] == kvaddr) && (list_head[1] == kvaddr))
continue;
fprintf(fp, "PRIO[%i]:\n", i);
ld = &list_data;
BZERO(ld, sizeof(struct list_data));
ld->start = list_head[0];
ld->list_head_offset = ctx_rq_off;
ld->flags |= VERBOSE;
ld->end = kvaddr;
hq_open();
cnt = do_list(ld);
hq_close();
if (cnt == -1) {
FREEBUF(u_spu_prio);
error(FATAL, "Couldn't walk runqueue[%i].\n", i);
}
}
FREEBUF(u_spu_prio);
}
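The skip test above relies on the kernel's embedded-list convention: an empty list_head points back at itself, which is exactly what the two list_head[] comparisons check. A minimal self-contained model of that check, used here to count non-empty priority buckets (structure and names are illustrative):

```c
#include <assert.h>

/* minimal model of the kernel's embedded doubly-linked list head */
struct list_head { struct list_head *next, *prev; };

/* an empty list_head's next and prev both point back at the head,
 * which is the per-bucket comparison dump_spu_runq makes above */
static int list_is_empty(const struct list_head *head)
{
	return head->next == head && head->prev == head;
}

/* count priority buckets that actually hold queued contexts */
static int count_busy_prios(const struct list_head *runq, int nr_prio)
{
	int i, busy = 0;

	for (i = 0; i < nr_prio; i++)
		if (!list_is_empty(&runq[i]))
			busy++;
	return busy;
}
```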
/*
 * The spurq command stands for "spu run queue" and shows info about
 * the contexts that are waiting on the SPU run queue.
 */
void cmd_spurq()
{
int c;
ulong spu_prio_addr, spu_prio;
long size;
while ((c = getopt(argcnt, args, "")) != EOF) {
switch(c)
{
default:
argerrs++;
break;
}
}
if (argerrs || args[optind])
cmd_usage(pc->curcmd, SYNOPSIS);
else {
if (symbol_exists("spu_prio")) {
spu_prio_addr = symbol_value("spu_prio");
readmem(spu_prio_addr, KVADDR, &spu_prio,
sizeof(spu_prio), "runq_array", FAULT_ON_ERROR);
dump_spu_runq(spu_prio);
} else
error(FATAL, "Could not get SPU run queue data.\n");
}
}
/**********************************************************************************
* HELP TEXTS
*/
char *help_spuctx[] = {
SPUCTX_CMD_NAME,
"shows complete info about a SPU context",
"[ID | PID | CTXADDR] ...",
" This command shows the fields of spu and spu_context structs for a ",
"SPU context, including debug info specially saved by kdump after a ",
"crash.",
" By default, it shows info about all the contexts created by the ",
"system, including ones in the runqueue. To specify the contexts of ",
"interest, the PID of the controller task, ID of the SPU which the ",
"context is bound to or the address of spu_context struct can be used ",
"as parameters.",
"\nEXAMPLES",
"\n Show info about contexts bound to SPUs 0 and 7, and the one ",
"controlled by thread whose PID is 1524:",
"\n crash> spuctx 0 7 1524",
"\n Dumping context fields for spu_context c00000003dcbdd80:",
" state = 0",
" prio = 120",
" local_store = 0xc000000039055840",
" rq = 0xc00000003dcbe720",
" node = 0",
" number = 0",
" pid = 1524",
" name = spe",
" slb_replace = 0",
" mm = 0xc0000000005dd700",
" timestamp = 0x10000566f",
" class_0_pending = 0",
" problem = 0xd000080080210000",
" priv2 = 0xd000080080230000",
" flags = 0",
" saved_mfc_sr1_RW = 59",
" saved_mfc_dar = 14987979559889612800",
" saved_mfc_dsisr = 0",
" saved_spu_runcntl_RW = 1",
" saved_spu_status_R = 1",
" saved_spu_npc_RW = 0",
"\n Dumping context fields for spu_context c00000003dec4e80:",
" state = 0",
" prio = 120",
" local_store = 0xc00000003b1cea40",
" rq = 0xc00000003dec5820",
" node = 0",
" number = 7",
" pid = 1538",
" name = spe",
" slb_replace = 0",
" mm = 0xc0000000005d2b80",
" timestamp = 0x10000566f",
" class_0_pending = 0",
" problem = 0xd000080080600000",
" priv2 = 0xd000080080620000",
" flags = 0",
" saved_mfc_sr1_RW = 59",
" saved_mfc_dar = 14987979559896297472",
" saved_mfc_dsisr = 0",
" saved_spu_runcntl_RW = 1",
" saved_spu_status_R = 1",
" saved_spu_npc_RW = 0",
"\n Dumping context fields for spu_context c00000003dcbdd80:",
" state = 0",
" prio = 120",
" local_store = 0xc000000039055840",
" rq = 0xc00000003dcbe720",
" node = 0",
" number = 0",
" pid = 1524",
" name = spe",
" slb_replace = 0",
" mm = 0xc0000000005dd700",
" timestamp = 0x10000566f",
" class_0_pending = 0",
" problem = 0xd000080080210000",
" priv2 = 0xd000080080230000",
" flags = 0",
" saved_mfc_sr1_RW = 59",
" saved_mfc_dar = 14987979559889612800",
" saved_mfc_dsisr = 0",
" saved_spu_runcntl_RW = 1",
" saved_spu_status_R = 1",
" saved_spu_npc_RW = 0",
"\n Show info about the context whose struct spu_context address is ",
"0xc00000003dcbed80:\n",
"crash> spuctx 0xc00000003dcbed80",
" ...",
NULL
};
char *help_spus[] = {
SPUS_CMD_NAME,
"shows how contexts are scheduled in the SPUs",
" ",
" This command shows how the contexts are scheduled in the SPUs of ",
"each node. For each SPU it provides the spu struct address, the SPU ",
"status, the spu_context struct address, the context state and the ",
"PID of the controlling thread.",
"\nEXAMPLE",
" Show SPU contexts:",
"\n crash> spus",
" NODE 0:",
" ID SPUADDR SPUSTATUS CTXADDR CTXSTATE PID ",
" 0 c000000001fac880 RUNNING c00000003dcbdd80 RUNNABLE 1524",
" 1 c000000001faca80 RUNNING c00000003bf34e00 RUNNABLE 1528",
" 2 c000000001facc80 RUNNING c00000003bf30e00 RUNNABLE 1525",
" 3 c000000001face80 RUNNING c000000039421d00 RUNNABLE 1533",
" 4 c00000003ee29080 RUNNING c00000003dec3e80 RUNNABLE 1534",
" 5 c00000003ee28e80 RUNNING c00000003bf32e00 RUNNABLE 1526",
" 6 c00000003ee28c80 STOPPED c000000039e5e700 SAVED 1522",
" 7 c00000003ee2e080 RUNNING c00000003dec4e80 RUNNABLE 1538",
"\n NODE 1:",
" ID SPUADDR SPUSTATUS CTXADDR CTXSTATE PID ",
" 8 c00000003ee2de80 RUNNING c00000003dcbed80 RUNNABLE 1529",
" 9 c00000003ee2dc80 RUNNING c00000003bf39e00 RUNNABLE 1535",
" 10 c00000003ee2da80 RUNNING c00000003bf3be00 RUNNABLE 1521",
" 11 c000000001fad080 RUNNING c000000039420d00 RUNNABLE 1532",
" 12 c000000001fad280 RUNNING c00000003bf3ee00 RUNNABLE 1536",
" 13 c000000001fad480 RUNNING c00000003dec2e80 RUNNABLE 1539",
" 14 c000000001fad680 RUNNING c00000003bf3ce00 RUNNABLE 1537",
" 15 c000000001fad880 RUNNING c00000003dec6e80 RUNNABLE 1540",
NULL
};
char *help_spurq[] = {
SPURQ_CMD_NAME,
"shows contexts on the SPU runqueue",
" ",
" This command shows info about all contexts waiting for execution ",
"in the SPU runqueue. No parameter is needed.",
"\nEXAMPLE",
" Show SPU runqueue:",
"\n crash> spurq",
" PRIO[120]:",
" c000000000fd7380",
" c00000003bf31e00",
" PRIO[125]:",
" c000000039422d00",
" c00000000181eb80",
NULL
};
16 years, 3 months
crash version 4.0-7.2 is available
by Dave Anderson
- Fix for initialization-time failure when running against 2.6.27
x86_64 kernels, which indicate "crash: cannot resolve: end_pfn".
The patch sets the new 2.6.27 x86_64 PAGE_OFFSET value, handles
the change in the x86_64 "_cpu_pda" variable declaration, and
distinguishes paravirtual "pv_ops" kernels from traditional xen
kernels. (oomichi(a)mxs.nes.nec.co.jp, anderson(a)redhat.com)
- When an improper structure member offset or structure size is
attempted, a partial crash backtrace is displayed in the ensuing
error message. However, if the crash binary was stripped, it would
show "/usr/bin/nm: /tmp/crash: no symbols" instead of the address
and name of the symbol. This has been fixed to work with stripped
binaries if the crash symbol can be found in the crash binary; if
the crash symbol cannot be found, such as for static text symbols,
it will just display its address and "(undetermined)".
(bwalle(a)suse.de)
- crash.spec file addition: Requires: binutils
(anderson(a)redhat.com)
- Fix for LKCD kerntypes debuginfo files to use "node_states" when
"node_online_map" is not in use. (cpw(a)sgi.com)
- Implement support for s390/s390x CONFIG_SPARSEMEM kernels. Without
the patch, crash sessions would fail during initialization with the
error message: "crash: CONFIG_SPARSEMEM kernels not supported for
this architecture". (holzheu(a)linux.vnet.ibm.com)
- Fix for "kmem -[sS]" when running against 2.6.27 CONFIG_SLUB kernels,
in which the kmem_cache.objects and .order members were replaced by
a kmem_cache_order_objects structure. Without the patch, the command
would fail with the error message: "kmem: invalid structure member
offset: kmem_cache_objects". The fix also recognizes and supports
potentially variable slab sizes as introduced by the kernel patch.
(anderson(a)redhat.com)
- Increased the maximum number of SIAL commands from 100 to 200.
(cpw(a)sgi.com)
Download from: http://people.redhat.com/anderson
16 years, 3 months
[PATCH] help screen indication of extension commands
by Cliff Wickman
From: Cliff Wickman <cpw(a)sgi.com>
It would be nice if the help screen differentiated between built-in
commands and extension commands.
This is particularly useful for sial extensions, since you can edit
them in your crash session: if you know that a command is sial, you
can fix or enhance it if necessary.
This patch implements that by changing the pc->cmdlist from a list
of name pointers to a list of struct command_table_entry pointers.
Then the help screen can highlight those containing a new flag:
if (cp->flags & EXTENSION)
Diffed against crash-4.0-4.7
Signed-off-by: Cliff Wickman <cpw(a)sgi.com>
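The comparator change at the heart of the patch can be sketched standalone: qsort() now orders an array of command_table_entry pointers by their name member, with "q" forced to the end. (This sketch adds a symmetric check on the second argument for a strictly consistent comparator; the patch below checks only the first.)

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct command_table_entry { char *name; };

/* compare two command_table_entry pointers by name, "q" always last */
static int sort_command_name(const void *p1, const void *p2)
{
	char *s1 = (*(struct command_table_entry **)p1)->name;
	char *s2 = (*(struct command_table_entry **)p2)->name;

	if (strcmp(s1, "q") == 0)
		return 1;
	if (strcmp(s2, "q") == 0)
		return -1;
	return strcmp(s1, s2);
}
```

Sorting pointers to entries named "q", "bt" and "help" with this comparator yields bt, help, q.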
---
defs.h | 4 ++-
help.c | 66 ++++++++++++++++++++++++++++++++++++++++-------------------------
2 files changed, 44 insertions(+), 26 deletions(-)
Index: crash-4.0-4.7/help.c
===================================================================
--- crash-4.0-4.7.orig/help.c
+++ crash-4.0-4.7/help.c
@@ -154,19 +154,23 @@ help_init(void)
for (cp = ext->command_table; cp->name; cp++) {
if (!(cp->flags & (CLEANUP|HIDDEN_COMMAND)))
pc->ncmds++;
+ cp->flags |= EXTENSION;
}
}
if (!pc->cmdlist) {
pc->cmdlistsz = pc->ncmds;
- if ((pc->cmdlist = (char **)
- malloc(sizeof(char *) * pc->cmdlistsz)) == NULL)
+ if ((pc->cmdlist = (struct command_table_entry **)
+ malloc(sizeof(struct command_table_entry *) *
+ pc->cmdlistsz)) == NULL)
error(FATAL,
"cannot malloc command list space\n");
} else if (pc->ncmds > pc->cmdlistsz) {
pc->cmdlistsz = pc->ncmds;
- if ((pc->cmdlist = (char **)realloc(pc->cmdlist,
- sizeof(char *) * pc->cmdlistsz)) == NULL)
+ if ((pc->cmdlist = (struct command_table_entry **)
+ realloc(pc->cmdlist,
+ sizeof(struct command_table_entry *) *
+ pc->cmdlistsz)) == NULL)
error(FATAL,
"cannot realloc command list space\n");
}
@@ -190,13 +194,13 @@ reshuffle_cmdlist(void)
for (cnt = 0, cp = pc->cmd_table; cp->name; cp++) {
if (!(cp->flags & HIDDEN_COMMAND))
- pc->cmdlist[cnt++] = cp->name;
+ pc->cmdlist[cnt++] = cp;
}
for (ext = extension_table; ext; ext = ext->next) {
for (cp = ext->command_table; cp->name; cp++) {
if (!(cp->flags & (CLEANUP|HIDDEN_COMMAND)))
- pc->cmdlist[cnt++] = cp->name;
+ pc->cmdlist[cnt++] = cp;
}
}
@@ -212,19 +216,21 @@ reshuffle_cmdlist(void)
* The help list is in alphabetical order, with exception of the "q" command,
* which has historically always been the last command in the list.
*/
-
+/*
+ * the pointers are pointers to struct command_table_entry
+ */
static int
-sort_command_name(const void *name1, const void *name2)
+sort_command_name(const void *struct1, const void *struct2)
{
- char **s1, **s2;
+ char *s1, *s2;
- s1 = (char **)name1;
- s2 = (char **)name2;
+ s1 = (*(struct command_table_entry **)struct1)->name;
+ s2 = (*(struct command_table_entry **)struct2)->name;
- if (STREQ(*s1, "q"))
+ if (STREQ(s1, "q"))
return 1;
- return strcmp(*s1, *s2);
+ return strcmp(s1, s2);
}
@@ -408,8 +414,9 @@ cmd_help(void)
void
display_help_screen(char *indent)
{
- int i, j, rows;
+ int i, j, rows, ext_count=0;
char **namep;
+ struct command_table_entry **cpp, *cp;
help_init();
@@ -418,15 +425,23 @@ display_help_screen(char *indent)
rows = (pc->ncmds + (HELP_COLUMNS-1)) / HELP_COLUMNS;
for (i = 0; i < rows; i++) {
- namep = &pc->cmdlist[i];
+ cpp = &(pc->cmdlist[i]);
for (j = 0; j < HELP_COLUMNS; j++) {
- fprintf(fp,"%-15s", *namep);
- namep += rows;
- if ((namep - pc->cmdlist) >= pc->ncmds)
+ cp = *cpp;
+ if (cp->flags & EXTENSION) {
+ fprintf(fp,"+%-15s", cp->name);
+ ext_count++;
+ } else {
+ fprintf(fp," %-15s", cp->name);
+ }
+ cpp += rows;
+ if ((cpp - pc->cmdlist) >= pc->ncmds)
break;
}
fprintf(fp,"\n%s", indent);
}
+ if (ext_count)
+ fprintf(fp,"+ denotes an extension command\n%s", indent);
fprintf(fp, "\n%s%s version: %-6s gdb version: %s\n", indent,
pc->program_name, pc->program_version, pc->gdb_version);
@@ -454,17 +469,16 @@ static void
display_commands(void)
{
int i, j, rows;
- char **namep;
+ struct command_table_entry **cp;
help_init();
rows = (pc->ncmds + (HELP_COLUMNS-1)) / HELP_COLUMNS;
for (i = 0; i < rows; i++) {
- namep = &pc->cmdlist[i];
+ cp = &pc->cmdlist[i];
for (j = 0; j < HELP_COLUMNS; j++) {
- fprintf(fp,"%s\n", *namep);
- namep += rows;
- if ((namep - pc->cmdlist) >= pc->ncmds) {
+ cp += rows;
+ if ((cp - pc->cmdlist) >= pc->ncmds) {
fprintf(fp, "BREAK\n");
break;
}
@@ -4957,8 +4971,10 @@ cmd_usage(char *cmd, int helpflag)
display_input_info();
display_output_info();
help_init();
- for (i = 0; i < pc->ncmds; i++)
- cmd_usage(pc->cmdlist[i], COMPLETE_HELP);
+ for (i = 0; i < pc->ncmds; i++) {
+ cp = *(&(pc->cmdlist[i]));
+ cmd_usage(cp->name, COMPLETE_HELP);
+ }
display_warranty_info();
display_copying_info();
goto done_usage;
Index: crash-4.0-4.7/defs.h
===================================================================
--- crash-4.0-4.7.orig/defs.h
+++ crash-4.0-4.7/defs.h
@@ -383,7 +383,8 @@ struct program_context {
struct termios termios_orig; /* non-raw settings */
struct termios termios_raw; /* while gathering command input */
int ncmds; /* number of commands in menu */
- char **cmdlist; /* current list of available commands */
+ struct command_table_entry **cmdlist;
+ /* current list of available commands */
int cmdlistsz; /* space available in cmdlist */
unsigned output_radix; /* current gdb output_radix */
void *sbrk; /* current sbrk value */
@@ -409,6 +410,7 @@ struct command_table_entry {
#define REFRESH_TASK_TABLE (0x1) /* command_table_entry flags */
#define HIDDEN_COMMAND (0x2)
#define CLEANUP (0x4) /* for extensions only */
+#define EXTENSION (0x8) /* is an extension */
/*
* A linked list of extension table structures keeps track of the current
16 years, 3 months