Re: [Crash-utility] Kernel Crash Analysis on Android
by Shankar, AmarX
Hi Dave,
Thanks for your information regarding the kexec tool.
I am unable to download kexec-tools from the link below.
http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/kexec-too...
It says HTTP 404 Page Not Found.
Could you please guide me on this?
Thanks & Regards,
Amar Shankar
> On Wed, Mar 21, 2012 at 06:00:00PM +0000, Shankar, AmarX wrote:
>
> > I want to do kernel crash Analysis on Android Merrifield Target.
> >
> > Could someone please help me how to do it?
>
> Merrifield is pretty much similar to Medfield, e.g. it has an x86 core. So I
> guess you can follow the instructions on how to set up kdump on x86 (see
> Documentation/kdump/kdump.txt), unless you already have that configured.
>
> crash should support this directly presuming you have vmlinux/vmcore files to
> feed it. You can configure crash to support x86 on x86_64 host by running:
>
> % make target=X86
> % make
>
> (or something along those lines).
Right -- just the first make command will suffice, i.e., when running
on an x86_64 host:
$ wget http://people.redhat.com/anderson/crash-6.0.4.tar.gz
$ tar xzf crash-6.0.4.tar.gz
...
$ cd crash-6.0.4
$ make target=X86
...
$ ./crash <path-to>/vmlinux <path-to>/vmcore
Dave
From: Shankar, AmarX
Sent: Wednesday, March 21, 2012 11:30 PM
To: 'crash-utility(a)redhat.com'
Subject: Kernel Crash Analysis on Android
Hi,
I want to do kernel crash Analysis on Android Merrifield Target.
Could someone please help me how to do it?
Thanks & Regards,
Amar Shankar
[PATCH] kmem, snap: iomem/ioport display and vmcore snapshot support
by HATAYAMA Daisuke
Some days ago I was in a situation where I had to convert a vmcore in
kvmdump format into ELF, since an extension module we maintain locally
can only be used with relatively old versions of the crash utility,
around version 4, and such old crash utilities cannot handle the kvmdump
format.
To do the conversion conveniently, I used the snap command with some
modifications so that it uses the iomem information in the vmcore instead
of the host's /proc/iomem. This patch is its cleaned-up version.
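For reference, here is a rough sketch of the conversion workflow described
above (the exact snap invocation and the file/binary names are my
assumptions, not taken from this posting):

$ crash vmlinux vmcore.kvmdump       # open the kvmdump-format dump
crash> extend snap.so                # load the (modified) snap extension
crash> snap vmcore.elf               # write an ELF-format copy of the dump
crash> quit
$ crash-4.x vmlinux vmcore.elf       # feed the ELF copy to the old crash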
In the course of this work, I naturally ended up also making an interface
for accessing resource objects, and so together with the snap command's
patch, I also extended the kmem command with iomem/ioport support.
Specifically:
kmem -r displays /proc/iomem
crash> kmem -r
00000000-0000ffff : reserved
00010000-0009dbff : System RAM
0009dc00-0009ffff : reserved
000c0000-000c7fff : Video ROM
...
and kmem -R displays /proc/ioports
crash> kmem -R
0000-001f : dma1
0020-0021 : pic1
0040-0043 : timer0
0050-0053 : timer1
...
Looking back into old versions of the kernel source code, the resource
structure has been unchanged since linux-2.4.0. I borrowed the way of
walking the resource tree in this patch from the latest v3.3-rc series,
but I guess the logic is also applicable to old kernels. I'm counting on
Dave's regression test suite to confirm that.
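To illustrate the walk itself, here is a minimal sketch of how a crash
command or extension can traverse the resource tree by reading each struct
resource and following its child and sibling pointers (assuming a 64-bit
target where resource_size_t fits in a ulong). The member names -- start,
end, name, child, sibling -- are the kernel's; the helper below and its
exact use of crash's MEMBER_OFFSET()/readmem() interfaces are my own
sketch, not the code in this patch:

static void dump_resource_tree(ulong res, int indent)
{
	ulong start, end, name_ptr, child, sibling;
	char name[64];

	while (res) {
		/* read the fields of this struct resource from the dumpfile */
		readmem(res + MEMBER_OFFSET("resource", "start"), KVADDR, &start,
			sizeof(ulong), "resource.start", FAULT_ON_ERROR);
		readmem(res + MEMBER_OFFSET("resource", "end"), KVADDR, &end,
			sizeof(ulong), "resource.end", FAULT_ON_ERROR);
		readmem(res + MEMBER_OFFSET("resource", "name"), KVADDR, &name_ptr,
			sizeof(ulong), "resource.name", FAULT_ON_ERROR);
		read_string(name_ptr, name, sizeof(name) - 1);

		fprintf(fp, "%*s%08lx-%08lx : %s\n", indent, "", start, end, name);

		/* descend into children first, then continue with the siblings */
		readmem(res + MEMBER_OFFSET("resource", "child"), KVADDR, &child,
			sizeof(ulong), "resource.child", FAULT_ON_ERROR);
		if (child)
			dump_resource_tree(child, indent + 2);

		readmem(res + MEMBER_OFFSET("resource", "sibling"), KVADDR, &sibling,
			sizeof(ulong), "resource.sibling", FAULT_ON_ERROR);
		res = sibling;
	}
}

Starting the walk from symbol_value("iomem_resource") or
symbol_value("ioport_resource") would yield output in the shape shown above.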
Also, there may be another command more suitable for iomem/ioport.
If necessary, I'll repost the patch.
---
HATAYAMA Daisuke (4):
Add vmcore snapshot support
Add kmem -r and -R options
Add dump iomem/ioport functions; a helper for resource objects
Add a helper function for iterating resource objects
defs.h | 9 ++++
extensions/snap.c | 54 ++++++++++++++++++++++-
help.c | 2 +
memory.c | 122 +++++++++++++++++++++++++++++++++++++++++++++++++++--
4 files changed, 180 insertions(+), 7 deletions(-)
--
Thanks.
HATAYAMA Daisuke
Re: [Crash-utility] question about phys_base
by Dave Anderson
----- Original Message -----
> >
> > OK, so then I don't understand what you mean by "may be the same"?
> >
> > You didn't answer my original question, but if I understand you correctly,
> > it would be impossible for the qemu host to create a PT_LOAD segment that
> > describes an x86_64 guest's __START_KERNEL_map region, because the host
> > doesn't know what kind of kernel the guest is running.
>
> Yes. Even if the guest is Linux, it is still impossible to do it, because
> the guest may be running the second kernel.
>
> qemu-dump walks all of the guest's page tables and collects the
> virtual-to-physical address mappings. If a page is not used by the guest,
> the virtual address is set to 0. I create PT_LOAD segments according to
> such mappings. So if the guest is Linux, there may be a PT_LOAD segment
> that describes the __START_KERNEL_map region. But the information stored
> in that PT_LOAD may be for the second kernel. If crash uses it, crash will
> see the second kernel, not the first kernel.
Just to be clear -- what do you mean by the "second" kernel? Do you
mean that a guest kernel crashed, and did a kdump operation,
and that second kdump kernel failed somehow, and now you're trying
to do a "virsh dump" on the kdump kernel?
Dave
question about phys_base
by Wen Congyang
Hi, Dave
I am implementing a new dump command in qemu. The vmcore's
format is ELF (like kdump), and I try to provide phys_base in
the PT_LOAD segments. But if the OS uses the first vcpu to do kdump,
the value of phys_base is wrong.
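For background, on x86_64 a kernel-text virtual address maps to a physical
address as phys = virt - __START_KERNEL_map + phys_base, so a consumer can
recover phys_base from a PT_LOAD segment that covers the __START_KERNEL_map
region. A minimal sketch of that derivation (an illustration only, not the
qemu or crash code):

#include <elf.h>

#define START_KERNEL_MAP 0xffffffff80000000UL

/* returns 1 and fills *phys_base if this PT_LOAD maps the kernel text */
static int phys_base_from_phdr(const Elf64_Phdr *phdr, unsigned long *phys_base)
{
	if (phdr->p_type == PT_LOAD && phdr->p_vaddr >= START_KERNEL_MAP) {
		/* phys = virt - __START_KERNEL_map + phys_base, hence: */
		*phys_base = phdr->p_paddr - (phdr->p_vaddr - START_KERNEL_MAP);
		return 1;
	}
	return 0;
}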
I found the function x86_64_virt_phys_base() in crash's code.
Is it OK to call this function first? If the function
succeeds, we would not calculate phys_base from the PT_LOAD segments.
Thanks
Wen Congyang
[PATCH] runq: search current task's runqueue explicitly
by HATAYAMA Daisuke
Currently, the runq sub-command doesn't take into account that the CFS
runqueue's current task has been removed from the CFS runqueue. Due to
this, the remaining CFS runqueue entries that follow the current task's
are not displayed. This patch fixes this by making the runq sub-command
search the current task's runqueue explicitly.
Note that a CFS runqueue exists for each task group, and so does each
CFS runqueue's current task, so the above search needs to be done
recursively.
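A hedged sketch of that recursion, in the style of crash's helpers:
starting from a CPU runqueue's cfs_rq, read its current sched_entity; if
that entity is a group entity (its my_q member points to the group's own
cfs_rq, under CONFIG_FAIR_GROUP_SCHED), recurse into that queue, otherwise
report the owning task. The member names are the kernel's; the helper
itself is only an illustration of the idea, not the patch:

static void show_cfs_current(ulong cfs_rq)
{
	ulong curr, my_q, task;

	readmem(cfs_rq + MEMBER_OFFSET("cfs_rq", "curr"), KVADDR, &curr,
		sizeof(ulong), "cfs_rq.curr", FAULT_ON_ERROR);
	if (!curr)
		return;

	readmem(curr + MEMBER_OFFSET("sched_entity", "my_q"), KVADDR, &my_q,
		sizeof(ulong), "sched_entity.my_q", FAULT_ON_ERROR);
	if (my_q) {
		/* group entity: descend into the group's own CFS runqueue */
		show_cfs_current(my_q);
		return;
	}

	/* task entity: the sched_entity is embedded in its task_struct */
	task = curr - MEMBER_OFFSET("task_struct", "se");
	fprintf(fp, "current: task_struct %lx\n", task);
}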
Test
====
On vmcore I made 7 task groups:
root group --- A --- AA --- AAA
               |      +---- AAB
               |
               +----- AB --- ABA
                       +---- ABB
and then, for each task group including the root group, I ran three
CPU-bound tasks, each of which is exactly
int main(void) { for (;;) continue; return 0; }
for a total of 24 tasks. For readability, I annotated each task name
with the name of its task group. For example, loop.ABA belongs to task
group ABA.
Look at the CPU 0 column below. [before] lacks 8 tasks, while [after]
successfully shows all tasks on the runqueue, which is identical to the
result of [sched debug], which is expected to output the correct result.
I'll send this vmcore later.
[before]
crash> runq | cat
CPU 0 RUNQUEUE: ffff88000a215f80
CURRENT: PID: 28263 TASK: ffff880037aaa040 COMMAND: "loop.ABA"
RT PRIO_ARRAY: ffff88000a216098
[no tasks queued]
CFS RB_ROOT: ffff88000a216010
[120] PID: 28262 TASK: ffff880037cc40c0 COMMAND: "loop.ABA"
<cut>
[after]
crash_fix> runq
CPU 0 RUNQUEUE: ffff88000a215f80
CURRENT: PID: 28263 TASK: ffff880037aaa040 COMMAND: "loop.ABA"
RT PRIO_ARRAY: ffff88000a216098
[no tasks queued]
CFS RB_ROOT: ffff88000a216010
[120] PID: 28262 TASK: ffff880037cc40c0 COMMAND: "loop.ABA"
[120] PID: 28271 TASK: ffff8800787a8b40 COMMAND: "loop.ABB"
[120] PID: 28272 TASK: ffff880037afd580 COMMAND: "loop.ABB"
[120] PID: 28245 TASK: ffff8800785e8b00 COMMAND: "loop.AB"
[120] PID: 28246 TASK: ffff880078628ac0 COMMAND: "loop.AB"
[120] PID: 28241 TASK: ffff880078616b40 COMMAND: "loop.AA"
[120] PID: 28239 TASK: ffff8800785774c0 COMMAND: "loop.AA"
[120] PID: 28240 TASK: ffff880078617580 COMMAND: "loop.AA"
[120] PID: 28232 TASK: ffff880079b5d4c0 COMMAND: "loop.A"
<cut>
[sched debug]
crash> runq -d
CPU 0
[120] PID: 28232 TASK: ffff880079b5d4c0 COMMAND: "loop.A"
[120] PID: 28239 TASK: ffff8800785774c0 COMMAND: "loop.AA"
[120] PID: 28240 TASK: ffff880078617580 COMMAND: "loop.AA"
[120] PID: 28241 TASK: ffff880078616b40 COMMAND: "loop.AA"
[120] PID: 28245 TASK: ffff8800785e8b00 COMMAND: "loop.AB"
[120] PID: 28246 TASK: ffff880078628ac0 COMMAND: "loop.AB"
[120] PID: 28262 TASK: ffff880037cc40c0 COMMAND: "loop.ABA"
[120] PID: 28263 TASK: ffff880037aaa040 COMMAND: "loop.ABA"
[120] PID: 28271 TASK: ffff8800787a8b40 COMMAND: "loop.ABB"
[120] PID: 28272 TASK: ffff880037afd580 COMMAND: "loop.ABB"
<cut>
Diff stat
=========
defs.h | 1 +
task.c | 37 +++++++++++++++++--------------------
2 files changed, 18 insertions(+), 20 deletions(-)
Thanks.
HATAYAMA, Daisuke
[RFC] makedumpfile, crash: LZO compression support
by HATAYAMA Daisuke
Hello,
This is an RFC patch set that adds LZO compression support to
makedumpfile and the crash utility. LZO is about as good as ZLIB in
compressed size but far better in speed, which reduces down time during
crash dump generation and refiltering.
How to build:
1. Get the LZO library, which is provided as the lzo-devel package on recent
Linux distributions, and is also available on the author's website:
http://www.oberhumer.com/opensource/lzo/.
2. Apply the patch set to makedumpfile v1.4.0 and crash v6.0.0.
3. Build both using make. But for crash, for now do the following:
$ make CFLAGS="-llzo2"
How to use:
I've introduced a new -l option for LZO compression in this patch. So, for
example, do as follows:
$ makedumpfile -l vmcore dumpfile
$ crash vmlinux dumpfile
Request for a configure-like feature in the crash utility:
I would like a configure-like feature in the crash utility so that users
can select at build time whether to actually enable the LZO feature,
that is: ./configure --enable-lzo or ./configure --disable-lzo.
The reason is that support staff often download and install the
latest version of the crash utility on machines where the lzo library is
not provided.
Looking at the source code, it looks to me like crash does some kind
of configuration processing in a local manner, around configure.c,
and I guess it's difficult to use the autoconf tools directly.
Or is there a better way?
Performance Comparison:
Sample Data
Ideally, I should have measured the performance on a sufficiently large
number of vmcores generated from machines that were actually running, but
I don't have enough sample vmcores yet, so I couldn't do that. So this
comparison doesn't answer the question of I/O time improvement; that
remains a TODO for now.
Instead, I chose the worst and best cases with regard to compression
ratio and speed only. Specifically, the former is /dev/urandom and
the latter is /dev/zero.
I generated the sample data of 10MB, 100MB and 1GB like this:
$ dd bs=4096 count=$((1024*1024*1024/4096)) if=/dev/urandom of=urandom.1GB
How to measure
Then I compressed the data in 4096-byte blocks and measured the total
compression time and output size. See the attached mycompress.c.
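The attached mycompress.c is not reproduced here; the following is only a
rough sketch of what such a per-block measurement could look like (buffer
sizing, timing, and the zlib compression level are my assumptions):

#include <stdio.h>
#include <time.h>
#include <zlib.h>
#include <lzo/lzo1x.h>

#define BLOCK 4096

int main(int argc, char **argv)
{
	unsigned char in[BLOCK], zout[2 * BLOCK], lout[BLOCK + BLOCK / 16 + 64 + 3];
	static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1) / sizeof(lzo_align_t)];
	unsigned long ztotal = 0, ltotal = 0;
	double ztime = 0, ltime = 0;
	struct timespec t0, t1;
	FILE *f;

	if (argc != 2 || !(f = fopen(argv[1], "rb")) || lzo_init() != LZO_E_OK)
		return 1;

	while (fread(in, 1, BLOCK, f) == BLOCK) {
		uLongf zlen = sizeof(zout);
		lzo_uint llen = sizeof(lout);

		clock_gettime(CLOCK_MONOTONIC, &t0);
		compress2(zout, &zlen, in, BLOCK, Z_BEST_SPEED);   /* zlib, one block */
		clock_gettime(CLOCK_MONOTONIC, &t1);
		ztime += (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
		ztotal += zlen;

		clock_gettime(CLOCK_MONOTONIC, &t0);
		lzo1x_1_compress(in, BLOCK, lout, &llen, wrkmem);  /* LZO, one block */
		clock_gettime(CLOCK_MONOTONIC, &t1);
		ltime += (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
		ltotal += llen;
	}
	fclose(f);

	printf("zlib: %lu bytes, %.3f s\n", ztotal, ztime);
	printf("lzo:  %lu bytes, %.3f s\n", ltotal, ltime);
	return 0;
}

Build with something like: gcc -O2 -o mycompress mycompress.c -lz -llzo2 -lrt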
Result
See attached file result.txt.
Discussion
For both kinds of data, lzo's compression was considerably quicker than
zlib's: the compression time ratio (lzo relative to zlib) is about 37%
for the urandom data, and about 8.5% for the zero data. The actual
contents of physical memory would fall somewhere between the two cases,
so I guess the average compression time ratio is between 8.5% and 37%.
Although it's beyond the topic of this patch set, we can estimate the
worst-case compression time for larger data sizes, since compression is
performed block-wise and the compression time increases linearly. The
estimated worst-case time for 2TB of memory is about 15 hours for lzo
and about 40 hours for zlib. In that worst case the compressed data is
larger than the original, so it is not actually used and the compression
time is completely wasted. I think compression should be done in
parallel, and I'll post such a patch later.
Diffstat
* makedumpfile
diskdump_mod.h | 3 +-
makedumpfile.c | 98 +++++++++++++++++++++++++++++++++++++++++++++++++------
makedumpfile.h | 12 +++++++
3 files changed, 101 insertions(+), 12 deletions(-)
* crash
defs.h | 1 +
diskdump.c | 20 +++++++++++++++++++-
diskdump.h | 3 ++-
3 files changed, 22 insertions(+), 2 deletions(-)
TODO
* evaluation including I/O time using actual vmcores
Thanks.
HATAYAMA, Daisuke
Re: [Crash-utility] [RFI] Support Fujitsu's sadump dump format
by tachibana@mxm.nes.nec.co.jp
Hi Hatayama-san,
On 2011/06/29 12:12:18 +0900, HATAYAMA Daisuke <d.hatayama(a)jp.fujitsu.com> wrote:
> From: Dave Anderson <anderson(a)redhat.com>
> Subject: Re: [Crash-utility] [RFI] Support Fujitsu's sadump dump format
> Date: Tue, 28 Jun 2011 08:57:42 -0400 (EDT)
>
> >
> >
> > ----- Original Message -----
> >> Fujitsu has a stand-alone dump mechanism based on firmware-level
> >> functionality, which we call SADUMP for short.
> >>
> >> We've maintained utility tools internally, but now we think the best
> >> approach is for the crash utility and makedumpfile to support the sadump
> >> format, from the viewpoint of both portability and maintainability.
> >>
> >> We will of course be responsible for its maintenance on a continuous
> >> basis. The sadump dump format is very similar to the diskdump format and
> >> hence to the kdump (compressed) format, so we estimate the patch set
> >> would be relatively small.
> >>
> >> Could you tell me whether the crash utility and makedumpfile can support
> >> the sadump format? If OK, we'll start to make a patch set.
I think it's reasonable to support sadump in makedumpfile. However, I have
several questions.
- Do you want to use makedumpfile to shrink an existing file that sadump
  has already dumped?
- It isn't possible to support the same form as the kdump-compressed format
  right now, is it?
- When the information that makedumpfile reads from a note of /proc/vmcore
  (or from a header of the kdump-compressed format) is extended by a future
  version of makedumpfile, will sadump need to be modified as well?
Thanks
tachibana
> >
> > Sure, yes, the crash utility can always support another dumpfile format.
> >
>
> Thanks. It helps a lot.
>
> > It's unclear to me how similar SADUMP is to diskdump/compressed-kdump.
> > Does your internal version patch diskdump.c, or do you maintain your
> > own "sadump.c"? I ask because if your patchset is at all intrusive,
> > I'd prefer it be kept in its own file, primarily for maintainability,
> > but also because SADUMP is essentially a black-box to anybody outside
> > Fujitsu.
>
> What I meant by ``similar'' holds both literally and logically. The
> format consists of a diskdump-like header, two kinds of bitmaps used
> for the same purpose as those in the diskdump format, and the memory
> data. They can be handled non-intrusively with the existing data
> structure, diskdump_data, so I hope they can be placed in diskdump.c.
>
> On the other hand, there is some code that belongs in its own specific
> area. sadump is triggered at some point during kdump's progress, and the
> register values contained in the vmcore vary according to that progress:
> if crash_notes has already been initialized when sadump is triggered,
> sadump packs the register values into crash_notes; if not, it packs the
> registers gathered by firmware. This is sadump-specific processing, so I
> think putting it in a dedicated sadump.c file is a natural and reasonable
> choice.
>
> Anyway, I have not made any patch set for this yet. I'll post a patch set
> when it is complete.
>
> Again, thanks a lot for the positive answer.
>
> Thanks.
> HATAYAMA, Daisuke
>
>
[v2] sbitmapq command
by Sergey Samoylenko
This patch adds a new 'sbitmapq' command. The command dumps
the contents of the sbitmap_queue structure and the used
bits in the bitmap. It can also dump a structure array
associated with the sbitmap_queue.
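For context, a common place an sbitmap_queue turns up in a vmcore is the
blk-mq tag allocator. One hedged way to locate such an address is to follow
the request queue's tag set down to its blk_mq_tags and the embedded
bitmap_tags; the chain of member names below comes from the kernel's blk-mq
headers and varies between kernel versions, so treat it as an illustration
rather than part of this patch:

crash> dev -d
crash> struct request_queue.tag_set <request_queue address>
crash> struct blk_mq_tag_set.tags <tag_set address>
crash> struct blk_mq_tags.bitmap_tags <tags address>
crash> sbitmapq <bitmap_tags address>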
v1 -> v2:
- Update the help page (Lianbo)
- Use crash interfaces (offset_table, size_table, GETBUF(),
etc.) to reduce the number of readmem() calls (Kazu)
Signed-off-by: Sergey Samoylenko <s.samoylenko(a)yadro.com>
---
Makefile | 7 +-
defs.h | 59 +++++
global_data.c | 1 +
help.c | 108 ++++++++
sbitmap.c | 664 ++++++++++++++++++++++++++++++++++++++++++++++++++
5 files changed, 837 insertions(+), 2 deletions(-)
create mode 100644 sbitmap.c
diff --git a/Makefile b/Makefile
index 4fd8b78..a381c5f 100644
--- a/Makefile
+++ b/Makefile
@@ -72,7 +72,7 @@ CFILES=main.c tools.c global_data.c memory.c filesys.c help.c task.c \
xen_hyper.c xen_hyper_command.c xen_hyper_global_data.c \
xen_hyper_dump_tables.c kvmdump.c qemu.c qemu-load.c sadump.c ipcs.c \
ramdump.c vmware_vmss.c vmware_guestdump.c \
- xen_dom0.c kaslr_helper.c
+ xen_dom0.c kaslr_helper.c sbitmap.c
SOURCE_FILES=${CFILES} ${GENERIC_HFILES} ${MCORE_HFILES} \
${REDHAT_CFILES} ${REDHAT_HFILES} ${UNWIND_HFILES} \
@@ -92,7 +92,7 @@ OBJECT_FILES=main.o tools.o global_data.o memory.o filesys.o help.o task.o \
xen_hyper.o xen_hyper_command.o xen_hyper_global_data.o \
xen_hyper_dump_tables.o kvmdump.o qemu.o qemu-load.o sadump.o ipcs.o \
ramdump.o vmware_vmss.o vmware_guestdump.o \
- xen_dom0.o kaslr_helper.o
+ xen_dom0.o kaslr_helper.o sbitmap.o
MEMORY_DRIVER_FILES=memory_driver/Makefile memory_driver/crash.c memory_driver/README
@@ -341,6 +341,9 @@ cmdline.o: ${GENERIC_HFILES} cmdline.c
tools.o: ${GENERIC_HFILES} tools.c
${CC} -c ${CRASH_CFLAGS} tools.c ${WARNING_OPTIONS} ${WARNING_ERROR}
+sbitmap.o: ${GENERIC_HFILES} sbitmap.c
+ ${CC} -c ${CRASH_CFLAGS} sbitmap.c ${WARNING_OPTIONS} ${WARNING_ERROR}
+
global_data.o: ${GENERIC_HFILES} global_data.c
${CC} -c ${CRASH_CFLAGS} global_data.c ${WARNING_OPTIONS} ${WARNING_ERROR}
diff --git a/defs.h b/defs.h
index b63741c..d407025 100644
--- a/defs.h
+++ b/defs.h
@@ -18,6 +18,7 @@
#ifndef GDB_COMMON
+#include <stdbool.h>
#include <stdio.h>
#include <stdarg.h>
#include <stdint.h>
@@ -2146,6 +2147,23 @@ struct offset_table { /* stash of commonly-used offsets */
long wait_queue_entry_private;
long wait_queue_head_head;
long wait_queue_entry_entry;
+ long sbitmap_word_depth;
+ long sbitmap_word_word;
+ long sbitmap_word_cleared;
+ long sbitmap_depth;
+ long sbitmap_shift;
+ long sbitmap_map_nr;
+ long sbitmap_map;
+ long sbitmap_queue_sb;
+ long sbitmap_queue_alloc_hint;
+ long sbitmap_queue_wake_batch;
+ long sbitmap_queue_wake_index;
+ long sbitmap_queue_ws;
+ long sbitmap_queue_ws_active;
+ long sbitmap_queue_round_robin;
+ long sbitmap_queue_min_shallow_depth;
+ long sbq_wait_state_wait_cnt;
+ long sbq_wait_state_wait;
};
struct size_table { /* stash of commonly-used sizes */
@@ -2310,6 +2328,10 @@ struct size_table { /* stash of commonly-used sizes */
long prb_desc;
long wait_queue_entry;
long task_struct_state;
+ long sbitmap_word;
+ long sbitmap;
+ long sbitmap_queue;
+ long sbq_wait_state;
};
struct array_table {
@@ -2436,6 +2458,7 @@ DEF_LOADER(ushort);
DEF_LOADER(short);
typedef void *pointer_t;
DEF_LOADER(pointer_t);
+DEF_LOADER(bool);
#define LOADER(TYPE) load_##TYPE
@@ -2449,6 +2472,7 @@ DEF_LOADER(pointer_t);
#define SHORT(ADDR) LOADER(short) ((char *)(ADDR))
#define UCHAR(ADDR) *((unsigned char *)((char *)(ADDR)))
#define VOID_PTR(ADDR) ((void *) (LOADER(pointer_t) ((char *)(ADDR))))
+#define BOOL(ADDR) LOADER(bool) ((char *)(ADDR))
#else
@@ -2462,6 +2486,7 @@ DEF_LOADER(pointer_t);
#define SHORT(ADDR) *((short *)((char *)(ADDR)))
#define UCHAR(ADDR) *((unsigned char *)((char *)(ADDR)))
#define VOID_PTR(ADDR) *((void **)((char *)(ADDR)))
+#define BOOL(ADDR) *((bool *)((char *)(ADDR)))
#endif /* NEED_ALIGNED_MEM_ACCESS */
@@ -4962,6 +4987,7 @@ void cmd_mach(void); /* main.c */
void cmd_help(void); /* help.c */
void cmd_test(void); /* test.c */
void cmd_ascii(void); /* tools.c */
+void cmd_sbitmapq(void); /* sbitmap.c */
void cmd_bpf(void); /* bfp.c */
void cmd_set(void); /* tools.c */
void cmd_eval(void); /* tools.c */
@@ -5575,6 +5601,7 @@ extern char *help_rd[];
extern char *help_repeat[];
extern char *help_runq[];
extern char *help_ipcs[];
+extern char *help_sbitmapq[];
extern char *help_search[];
extern char *help_set[];
extern char *help_sig[];
@@ -5844,6 +5871,38 @@ void devdump_info(void *, ulonglong, FILE *);
void ipcs_init(void);
ulong idr_find(ulong, int);
+/*
+ * sbitmap.c
+ */
+/* sbitmap helpers */
+struct sbitmap_context {
+ unsigned depth;
+ unsigned shift;
+ unsigned map_nr;
+ ulong map_addr;
+};
+
+typedef bool (*sbitmap_for_each_fn)(unsigned int idx, void *p);
+
+void sbitmap_for_each_set(const struct sbitmap_context *sc,
+ sbitmap_for_each_fn fn, void *data);
+void sbitmap_context_load(ulong addr, struct sbitmap_context *sc);
+
+/* sbitmap_queue helpers */
+typedef bool (*sbitmapq_for_each_fn)(unsigned int idx, ulong addr, void *p);
+
+struct sbitmapq_ops {
+ /* array params associated with the bitmap */
+ ulong addr;
+ ulong size;
+ /* callback params */
+ sbitmapq_for_each_fn fn;
+ void *p;
+};
+
+void sbitmapq_init(void);
+void sbitmapq_for_each_set(ulong addr, struct sbitmapq_ops *ops);
+
#ifdef ARM
void arm_init(int);
void arm_dump_machdep_table(ulong);
diff --git a/global_data.c b/global_data.c
index a316d1c..f9bb7d0 100644
--- a/global_data.c
+++ b/global_data.c
@@ -105,6 +105,7 @@ struct command_table_entry linux_command_table[] = {
{"rd", cmd_rd, help_rd, MINIMAL},
{"repeat", cmd_repeat, help_repeat, 0},
{"runq", cmd_runq, help_runq, REFRESH_TASK_TABLE},
+ {"sbitmapq", cmd_sbitmapq, help_sbitmapq, 0},
{"search", cmd_search, help_search, 0},
{"set", cmd_set, help_set, REFRESH_TASK_TABLE | MINIMAL},
{"sig", cmd_sig, help_sig, REFRESH_TASK_TABLE},
diff --git a/help.c b/help.c
index 04a7eff..d151f2e 100644
--- a/help.c
+++ b/help.c
@@ -962,6 +962,114 @@ char *help_ascii[] = {
NULL
};
+char *help_sbitmapq[] = {
+"sbitmapq",
+"sbitmap_queue struct contents",
+"[-s struct[.member[,member]] -p address [-v]] address",
+" The command dumps the contents of the sbitmap_queue structure and",
+" the used bits in the bitmap. Also, it shows the dump of a structure",
+" array associated with the sbitmap_queue.",
+"",
+" The arguments are as follows:",
+"",
+" -s struct - name of a C-code structure, that is stored in an array",
+" sssociated with sbitmap_queue structure. Use the",
+" \"struct.member\" format in order to display a particular",
+" member of the structure. -s option requires -p option",
+"",
+" -p address - address of a structure array associated with sbitmap_queue",
+" structure. The set bits in sbitmap are used for the index",
+" in an associated array.",
+"",
+" -x - override default output format with hexadecimal format.",
+"",
+" -d - override default output format with decimal format.",
+"",
+" -v - By default, the sbitmap command shows only a used sbitmap",
+" index and a structure address in the associated array.",
+" This flag says to print a formatted display of the",
+" contents of a structure in an associated array. -v option",
+" requires of -s.",
+"",
+"EXAMPLES",
+"",
+" All examples are shown on the base of Linux Target system whit iSCSI",
+" transport.",
+"",
+" Display the common sbitmap information for target session:",
+"",
+" %s> struct -oh se_session 0xc0000000e118c760 | grep sbitmap_queue",
+" [c0000000e118c808] struct sbitmap_queue sess_tag_pool;",
+" %s>",
+" %s> sbitmapq c0000000e118c808",
+" depth = 136",
+" busy = 4",
+" cleared = 26",
+" bits_per_word = 32",
+" map_nr = 5",
+" alloc_hint = {74, 36, 123, 101}",
+" wake_batch = 8",
+" wake_index = 0",
+" ws_active = 0",
+" ws = {",
+" { .wait_cnt = 8, .wait = inactive },",
+" { .wait_cnt = 8, .wait = inactive },",
+" { .wait_cnt = 8, .wait = inactive },",
+" { .wait_cnt = 8, .wait = inactive },",
+" { .wait_cnt = 8, .wait = inactive },",
+" { .wait_cnt = 8, .wait = inactive },",
+" { .wait_cnt = 8, .wait = inactive },",
+" { .wait_cnt = 8, .wait = inactive },",
+" }",
+" round_robin = 0",
+" min_shallow_depth = 4294967295",
+"",
+" 00000000: 0000 0000 0000 0000 0030 0000 0000 0000",
+" 00000010: 00",
+"",
+" Display the addresses of structure are associated with",
+" sbitmap_queue (for iscsi it is 'iscsi_cmd' structure):",
+"",
+" %s> struct se_session 0xc0000000e118c760 | grep sess_cmd_map",
+" sess_cmd_map = 0xc0000000671c0000,",
+" %s>",
+" %s> sbitmapq -s iscsi_cmd -p 0xc0000000671c0000 c0000000e118c808",
+" 76: 0xc0000000671d5600",
+" 77: 0xc0000000671d5a80",
+"",
+
+" Dump of formatted content of structures:",
+"",
+" %s> sbitmapq -s iscsi_cmd -p 0xc0000000671c0000 -v c0000000e118c808",
+" 76 (0xc0000000671d5600):",
+" struct iscsi_cmd {",
+" dataout_timer_flags = 0,",
+" dataout_timeout_retries = 0 '\\000',",
+" error_recovery_count = 0 '\\000',",
+" deferred_i_state = ISTATE_NO_STATE,",
+" i_state = ISTATE_SENT_STATUS,",
+" ...",
+" first_data_sg = 0xc0000000e306b080,",
+" first_data_sg_off = 0,",
+" kmapped_nents = 1,",
+" sense_reason = 0",
+" }",
+" 77 (0xc0000000671d5a80):",
+" struct iscsi_cmd {",
+" dataout_timer_flags = 0,",
+" dataout_timeout_retries = 0 '\\000',",
+" error_recovery_count = 0 '\\000',",
+" deferred_i_state = ISTATE_NO_STATE,",
+" i_state = ISTATE_NEW_CMD,",
+" ...",
+" first_data_sg = 0x0,",
+" first_data_sg_off = 0,",
+" kmapped_nents = 0,",
+" sense_reason = 0",
+" }",
+NULL
+};
+
char *help_quit[] = {
"quit",
"exit this session",
diff --git a/sbitmap.c b/sbitmap.c
new file mode 100644
index 0000000..5343a88
--- /dev/null
+++ b/sbitmap.c
@@ -0,0 +1,664 @@
+/* sbitmap.c - core analysis suite
+ *
+ * Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc.
+ * Copyright (C) 2002-2020 David Anderson
+ * Copyright (C) 2002-2020 Red Hat, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "defs.h"
+
+#define SBQ_WAIT_QUEUES 8
+
+/* sbitmap_queue struct context */
+struct sbitmap_queue_context {
+ ulong sb_addr;
+ ulong alloc_hint;
+ unsigned int wake_batch;
+ int wake_index;
+ ulong ws_addr;
+ int ws_active;
+ bool round_robin;
+ unsigned int min_shallow_depth;
+
+};
+
+struct sbitmapq_data {
+#define SBITMAPQ_DATA_FLAG_STRUCT_NAME (VERBOSE << 1)
+#define SBITMAPQ_DATA_FLAG_STRUCT_ADDR (VERBOSE << 2)
+#define SBITMAPQ_DATA_FLAG_STRUCT_MEMBER (VERBOSE << 3)
+ ulong flags;
+ int radix;
+ /* sbitmap_queue info */
+ ulong addr;
+ /* data array info */
+ ulong data_addr;
+ char *data_name;
+ int data_size;
+};
+
+#define SB_FLAG_INIT 0x01
+
+static uint sb_flags = 0;
+
+static inline unsigned int __const_hweight8(unsigned long w)
+{
+ return
+ (!!((w) & (1ULL << 0))) +
+ (!!((w) & (1ULL << 1))) +
+ (!!((w) & (1ULL << 2))) +
+ (!!((w) & (1ULL << 3))) +
+ (!!((w) & (1ULL << 4))) +
+ (!!((w) & (1ULL << 5))) +
+ (!!((w) & (1ULL << 6))) +
+ (!!((w) & (1ULL << 7)));
+}
+
+#define __const_hweight16(w) (__const_hweight8(w) + __const_hweight8((w) >> 8))
+#define __const_hweight32(w) (__const_hweight16(w) + __const_hweight16((w) >> 16))
+#define __const_hweight64(w) (__const_hweight32(w) + __const_hweight32((w) >> 32))
+
+#define hweight32(w) __const_hweight32(w)
+#define hweight64(w) __const_hweight64(w)
+
+#define BIT(nr) (1UL << (nr))
+
+static inline unsigned long min(unsigned long a, unsigned long b)
+{
+ return (a < b) ? a : b;
+}
+
+static unsigned long __last_word_mask(unsigned long nbits)
+{
+ return ~0UL >> (-(nbits) & (BITS_PER_LONG - 1));
+}
+
+static unsigned long bitmap_hweight_long(unsigned long w)
+{
+ return sizeof(w) == 4 ? hweight32(w) : hweight64(w);
+}
+
+static unsigned long bitmap_weight(unsigned long bitmap, unsigned int bits)
+{
+ unsigned long w = 0;
+
+ w += bitmap_hweight_long(bitmap);
+ if (bits % BITS_PER_LONG)
+ w += bitmap_hweight_long(bitmap & __last_word_mask(bits));
+
+ return w;
+}
+
+static unsigned int __sbitmap_weight(const struct sbitmap_context *sc, bool set)
+{
+ const ulong sbitmap_word_size = SIZE(sbitmap_word);
+ const ulong w_depth_off = OFFSET(sbitmap_word_depth);
+ const ulong w_word_off = OFFSET(sbitmap_word_word);
+ const ulong w_cleared_off = OFFSET(sbitmap_word_cleared);
+
+ unsigned int weight = 0;
+ ulong addr = sc->map_addr;
+ ulong depth, word, cleared;
+ char *sbitmap_word_buf;
+ int i;
+
+ sbitmap_word_buf = GETBUF(sbitmap_word_size);
+
+ for (i = 0; i < sc->map_nr; i++) {
+ readmem(addr, KVADDR, sbitmap_word_buf, sbitmap_word_size, "sbitmap_word", FAULT_ON_ERROR);
+
+ depth = ULONG(sbitmap_word_buf + w_depth_off);
+
+ if (set) {
+ word = ULONG(sbitmap_word_buf + w_word_off);
+ weight += bitmap_weight(word, depth);
+ } else {
+ cleared = ULONG(sbitmap_word_buf + w_cleared_off);
+ weight += bitmap_weight(cleared, depth);
+ }
+
+ addr += sbitmap_word_size;
+ }
+
+ FREEBUF(sbitmap_word_buf);
+
+ return weight;
+}
+
+static unsigned int sbitmap_weight(const struct sbitmap_context *sc)
+{
+ return __sbitmap_weight(sc, true);
+}
+
+static unsigned int sbitmap_cleared(const struct sbitmap_context *sc)
+{
+ return __sbitmap_weight(sc, false);
+}
+
+static void sbitmap_emit_byte(unsigned int offset, uint8_t byte)
+{
+ if ((offset & 0xf) == 0) {
+ if (offset != 0)
+ fputc('\n', fp);
+ fprintf(fp, "%08x:", offset);
+ }
+ if ((offset & 0x1) == 0)
+ fputc(' ', fp);
+ fprintf(fp, "%02x", byte);
+}
+
+static void sbitmap_bitmap_show(const struct sbitmap_context *sc)
+{
+ const ulong sbitmap_word_size = SIZE(sbitmap_word);
+ const ulong w_depth_off = OFFSET(sbitmap_word_depth);
+ const ulong w_word_off = OFFSET(sbitmap_word_word);
+ const ulong w_cleared_off = OFFSET(sbitmap_word_cleared);
+
+ uint8_t byte = 0;
+ unsigned int byte_bits = 0;
+ unsigned int offset = 0;
+ ulong addr = sc->map_addr;
+ char *sbitmap_word_buf;
+ int i;
+
+ sbitmap_word_buf = GETBUF(sbitmap_word_size);
+
+ for (i = 0; i < sc->map_nr; i++) {
+ unsigned long word, cleared, word_bits;
+
+ readmem(addr, KVADDR, sbitmap_word_buf, sbitmap_word_size, "sbitmap_word", FAULT_ON_ERROR);
+
+ word = ULONG(sbitmap_word_buf + w_word_off);
+ cleared = ULONG(sbitmap_word_buf + w_cleared_off);
+ word_bits = ULONG(sbitmap_word_buf + w_depth_off);
+
+ word &= ~cleared;
+
+ while (word_bits > 0) {
+ unsigned int bits = min(8 - byte_bits, word_bits);
+
+ byte |= (word & (BIT(bits) - 1)) << byte_bits;
+ byte_bits += bits;
+ if (byte_bits == 8) {
+ sbitmap_emit_byte(offset, byte);
+ byte = 0;
+ byte_bits = 0;
+ offset++;
+ }
+ word >>= bits;
+ word_bits -= bits;
+ }
+
+ addr += sbitmap_word_size;
+ }
+ if (byte_bits) {
+ sbitmap_emit_byte(offset, byte);
+ offset++;
+ }
+ if (offset)
+ fputc('\n', fp);
+
+ FREEBUF(sbitmap_word_buf);
+}
+
+static unsigned long sbitmap_find_next_bit(unsigned long word,
+ unsigned long size, unsigned long offset)
+{
+ if (size > BITS_PER_LONG)
+ error(FATAL, "%s: word size isn't correct\n", __func__);
+
+ for (; offset < size; offset++)
+ if (word & (1UL << offset))
+ return offset;
+
+ return size;
+}
+
+static void __sbitmap_for_each_set(const struct sbitmap_context *sc,
+ unsigned int start, sbitmap_for_each_fn fn, void *data)
+{
+ const ulong sbitmap_word_size = SIZE(sbitmap_word);
+ const ulong w_depth_off = OFFSET(sbitmap_word_depth);
+ const ulong w_word_off = OFFSET(sbitmap_word_word);
+ const ulong w_cleared_off = OFFSET(sbitmap_word_cleared);
+
+ unsigned int index;
+ unsigned int nr;
+ unsigned int scanned = 0;
+ char *sbitmap_word_buf;
+
+ sbitmap_word_buf = GETBUF(sbitmap_word_size);
+
+ if (start >= sc->map_nr)
+ start = 0;
+
+ index = start >> sc->shift;
+ nr = start & ((1U << sc->shift) - 1U);
+
+ while (scanned < sc->depth) {
+ unsigned long w_addr = sc->map_addr + (sbitmap_word_size * index);
+ unsigned long w_depth, w_word, w_cleared;
+ unsigned long word, depth;
+
+ readmem(w_addr, KVADDR, sbitmap_word_buf, sbitmap_word_size, "sbitmap_word", FAULT_ON_ERROR);
+
+ w_depth = ULONG(sbitmap_word_buf + w_depth_off);
+ w_word = ULONG(sbitmap_word_buf + w_word_off);
+ w_cleared = ULONG(sbitmap_word_buf + w_cleared_off);
+
+ depth = min(w_depth - nr, sc->depth - scanned);
+
+ scanned += depth;
+ word = w_word & ~w_cleared;
+ if (!word)
+ goto next;
+
+ /*
+ * On the first iteration of the outer loop, we need to add the
+ * bit offset back to the size of the word for find_next_bit().
+ * On all other iterations, nr is zero, so this is a noop.
+ */
+ depth += nr;
+ while (1) {
+ nr = sbitmap_find_next_bit(word, depth, nr);
+ if (nr >= depth)
+ break;
+ if (!fn((index << sc->shift) + nr, data)) {
+ FREEBUF(sbitmap_word_buf);
+ return;
+ }
+
+ nr++;
+ }
+next:
+ nr = 0;
+ if (++index >= sc->map_nr)
+ index = 0;
+ }
+
+ FREEBUF(sbitmap_word_buf);
+}
+
+void sbitmap_for_each_set(const struct sbitmap_context *sc,
+ sbitmap_for_each_fn fn, void *data)
+{
+ __sbitmap_for_each_set(sc, 0, fn, data);
+}
+
+static void sbitmap_queue_show(const struct sbitmap_queue_context *sqc,
+ const struct sbitmap_context *sc)
+{
+ int cpus = get_cpus_possible();
+ int sbq_wait_state_size, wait_cnt_off, wait_off, list_head_off;
+ char *sbq_wait_state_buf;
+ bool first;
+ int i;
+
+ fprintf(fp, "depth = %u\n", sc->depth);
+ fprintf(fp, "busy = %u\n", sbitmap_weight(sc) - sbitmap_cleared(sc));
+ fprintf(fp, "cleared = %u\n", sbitmap_cleared(sc));
+ fprintf(fp, "bits_per_word = %u\n", 1U << sc->shift);
+ fprintf(fp, "map_nr = %u\n", sc->map_nr);
+
+ fputs("alloc_hint = {", fp);
+ first = true;
+ for (i = 0; i < cpus; i++) {
+ ulong ptr;
+ int val;
+
+ if (!first)
+ fprintf(fp, ", ");
+ first = false;
+
+ ptr = kt->__per_cpu_offset[i] + sqc->alloc_hint;
+ readmem(ptr, KVADDR, &val, sizeof(val), "alloc_hint", FAULT_ON_ERROR);
+
+ fprintf(fp, "%u", val);
+ }
+ fputs("}\n", fp);
+
+ fprintf(fp, "wake_batch = %u\n", sqc->wake_batch);
+ fprintf(fp, "wake_index = %d\n", sqc->wake_index);
+ fprintf(fp, "ws_active = %d\n", sqc->ws_active);
+
+ sbq_wait_state_size = SIZE(sbq_wait_state);
+ wait_cnt_off = OFFSET(sbq_wait_state_wait_cnt);
+ wait_off = OFFSET(sbq_wait_state_wait);
+ list_head_off = OFFSET(wait_queue_head_head);
+
+ sbq_wait_state_buf = GETBUF(sbq_wait_state_size);
+
+ fputs("ws = {\n", fp);
+ for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
+ ulong ws_addr = sqc->ws_addr + (sbq_wait_state_size * i);
+ struct kernel_list_head *lh;
+ ulong wait_cnt_addr, list_head_addr;
+ ulong wait_cnt;
+
+ readmem(ws_addr, KVADDR, sbq_wait_state_buf, sbq_wait_state_size, "sbq_wait_state", FAULT_ON_ERROR);
+
+ wait_cnt = INT(sbq_wait_state_buf + wait_cnt_off);
+ lh = (struct kernel_list_head *)(sbq_wait_state_buf + wait_off + list_head_off);
+
+ fprintf(fp, "\t{ .wait_cnt = %lu, .wait = %s },\n",
+ wait_cnt, (lh->next == lh->prev) ? "inactive" : "active");
+ }
+ fputs("}\n", fp);
+
+ FREEBUF(sbq_wait_state_buf);
+
+ fprintf(fp, "round_robin = %d\n", sqc->round_robin);
+ fprintf(fp, "min_shallow_depth = %u\n", sqc->min_shallow_depth);
+}
+
+static void sbitmap_queue_context_load(ulong addr, struct sbitmap_queue_context *sqc)
+{
+ char *sbitmap_queue_buf;
+
+ sqc->sb_addr = addr + OFFSET(sbitmap_queue_sb);
+
+ sbitmap_queue_buf = GETBUF(SIZE(sbitmap_queue));
+ readmem(addr, KVADDR, sbitmap_queue_buf, SIZE(sbitmap_queue), "sbitmap_queue", FAULT_ON_ERROR);
+
+ sqc->alloc_hint = ULONG(sbitmap_queue_buf + OFFSET(sbitmap_queue_alloc_hint));
+ sqc->wake_batch = UINT(sbitmap_queue_buf + OFFSET(sbitmap_queue_wake_batch));
+ sqc->wake_index = INT(sbitmap_queue_buf + OFFSET(sbitmap_queue_wake_index));
+ sqc->ws_addr = ULONG(sbitmap_queue_buf + OFFSET(sbitmap_queue_ws));
+ sqc->ws_active = INT(sbitmap_queue_buf + OFFSET(sbitmap_queue_ws_active));
+ sqc->round_robin = BOOL(sbitmap_queue_buf + OFFSET(sbitmap_queue_round_robin));
+ sqc->min_shallow_depth = UINT(sbitmap_queue_buf + OFFSET(sbitmap_queue_min_shallow_depth));
+
+ FREEBUF(sbitmap_queue_buf);
+}
+
+void sbitmap_context_load(ulong addr, struct sbitmap_context *sc)
+{
+ char *sbitmap_buf;
+
+ sbitmap_buf = GETBUF(SIZE(sbitmap));
+ readmem(addr, KVADDR, sbitmap_buf, SIZE(sbitmap), "sbitmap", FAULT_ON_ERROR);
+
+ sc->depth = UINT(sbitmap_buf + OFFSET(sbitmap_depth));
+ sc->shift = UINT(sbitmap_buf + OFFSET(sbitmap_shift));
+ sc->map_nr = UINT(sbitmap_buf + OFFSET(sbitmap_map_nr));
+ sc->map_addr = ULONG(sbitmap_buf + OFFSET(sbitmap_map));
+
+ FREEBUF(sbitmap_buf);
+}
+
+static bool for_each_func(unsigned int idx, void *p)
+{
+ struct sbitmapq_ops *ops = p;
+ ulong addr = ops->addr + (ops->size * idx);
+
+ return ops->fn(idx, addr, ops->p);
+}
+
+void sbitmapq_for_each_set(ulong addr, struct sbitmapq_ops *ops)
+{
+ struct sbitmap_queue_context sqc = {0};
+ struct sbitmap_context sc = {0};
+
+ sbitmap_queue_context_load(addr, &sqc);
+ sbitmap_context_load(sqc.sb_addr, &sc);
+
+ sbitmap_for_each_set(&sc, for_each_func, ops);
+}
+
+static void dump_struct_members(const char *s, ulong addr, unsigned radix)
+{
+ int i, argc;
+ char *p1, *p2;
+ char *structname, *members;
+ char *arglist[MAXARGS];
+
+ structname = GETBUF(strlen(s) + 1);
+ members = GETBUF(strlen(s) + 1);
+
+ strcpy(structname, s);
+ p1 = strstr(structname, ".") + 1;
+
+ p2 = strstr(s, ".") + 1;
+ strcpy(members, p2);
+ replace_string(members, ",", ' ');
+ argc = parse_line(members, arglist);
+
+ for (i = 0; i < argc; i++) {
+ *p1 = NULLCHAR;
+ strcat(structname, arglist[i]);
+ dump_struct_member(structname, addr, radix);
+ }
+
+ FREEBUF(structname);
+ FREEBUF(members);
+}
+
+static bool sbitmap_data_print(unsigned int idx, ulong addr, void *p)
+{
+ const struct sbitmapq_data *sd = p;
+ bool verbose = !!(sd->flags & VERBOSE);
+ bool members = !!(sd->flags & SBITMAPQ_DATA_FLAG_STRUCT_MEMBER);
+
+ if (verbose) {
+ fprintf(fp, "%d (0x%08lx):\n", idx, addr);
+ if (members)
+ dump_struct_members(sd->data_name, addr, sd->radix);
+ else
+ dump_struct(sd->data_name, addr, sd->radix);
+ } else
+ fprintf(fp, "%d: 0x%08lx\n", idx, addr);
+
+ return true;
+}
+
+static void sbitmap_queue_data_dump(struct sbitmapq_data *sd)
+{
+ struct sbitmapq_ops ops = {
+ .addr = sd->data_addr,
+ .size = sd->data_size,
+ .fn = sbitmap_data_print,
+ .p = sd
+ };
+
+ sbitmapq_for_each_set(sd->addr, &ops);
+}
+
+static void sbitmap_queue_dump(const struct sbitmapq_data *sd)
+{
+ struct sbitmap_queue_context sqc = {0};
+ struct sbitmap_context sc = {0};
+
+ sbitmap_queue_context_load(sd->addr, &sqc);
+ sbitmap_context_load(sqc.sb_addr, &sc);
+
+ sbitmap_queue_show(&sqc, &sc);
+ fputc('\n', fp);
+ sbitmap_bitmap_show(&sc);
+}
+
+void sbitmapq_init(void)
+{
+ if (sb_flags & SB_FLAG_INIT)
+ return;
+
+ STRUCT_SIZE_INIT(sbitmap_word, "sbitmap_word");
+ STRUCT_SIZE_INIT(sbitmap, "sbitmap");
+ STRUCT_SIZE_INIT(sbitmap_queue, "sbitmap_queue");
+ STRUCT_SIZE_INIT(sbq_wait_state, "sbq_wait_state");
+
+ MEMBER_OFFSET_INIT(sbitmap_word_depth, "sbitmap_word", "depth");
+ MEMBER_OFFSET_INIT(sbitmap_word_word, "sbitmap_word", "word");
+ MEMBER_OFFSET_INIT(sbitmap_word_cleared, "sbitmap_word", "cleared");
+
+ MEMBER_OFFSET_INIT(sbitmap_depth, "sbitmap", "depth");
+ MEMBER_OFFSET_INIT(sbitmap_shift, "sbitmap", "shift");
+ MEMBER_OFFSET_INIT(sbitmap_map_nr, "sbitmap", "map_nr");
+ MEMBER_OFFSET_INIT(sbitmap_map, "sbitmap", "map");
+
+ MEMBER_OFFSET_INIT(sbitmap_queue_sb, "sbitmap_queue", "sb");
+ MEMBER_OFFSET_INIT(sbitmap_queue_alloc_hint, "sbitmap_queue", "alloc_hint");
+ MEMBER_OFFSET_INIT(sbitmap_queue_wake_batch, "sbitmap_queue", "wake_batch");
+ MEMBER_OFFSET_INIT(sbitmap_queue_wake_index, "sbitmap_queue", "wake_index");
+ MEMBER_OFFSET_INIT(sbitmap_queue_ws, "sbitmap_queue", "ws");
+ MEMBER_OFFSET_INIT(sbitmap_queue_ws_active, "sbitmap_queue", "ws_active");
+ MEMBER_OFFSET_INIT(sbitmap_queue_round_robin, "sbitmap_queue", "round_robin");
+ MEMBER_OFFSET_INIT(sbitmap_queue_min_shallow_depth, "sbitmap_queue", "min_shallow_depth");
+
+ MEMBER_OFFSET_INIT(sbq_wait_state_wait_cnt, "sbq_wait_state", "wait_cnt");
+ MEMBER_OFFSET_INIT(sbq_wait_state_wait, "sbq_wait_state", "wait");
+
+ if (!VALID_SIZE(sbitmap_word) ||
+ !VALID_SIZE(sbitmap) ||
+ !VALID_SIZE(sbitmap_queue) ||
+ !VALID_SIZE(sbq_wait_state) ||
+ INVALID_MEMBER(sbitmap_word_depth) ||
+ INVALID_MEMBER(sbitmap_word_word) ||
+ INVALID_MEMBER(sbitmap_word_cleared) ||
+ INVALID_MEMBER(sbitmap_depth) ||
+ INVALID_MEMBER(sbitmap_shift) ||
+ INVALID_MEMBER(sbitmap_map_nr) ||
+ INVALID_MEMBER(sbitmap_map) ||
+ INVALID_MEMBER(sbitmap_queue_sb) ||
+ INVALID_MEMBER(sbitmap_queue_alloc_hint) ||
+ INVALID_MEMBER(sbitmap_queue_wake_batch) ||
+ INVALID_MEMBER(sbitmap_queue_wake_index) ||
+ INVALID_MEMBER(sbitmap_queue_ws) ||
+ INVALID_MEMBER(sbitmap_queue_ws_active) ||
+ INVALID_MEMBER(sbitmap_queue_round_robin) ||
+ INVALID_MEMBER(sbitmap_queue_min_shallow_depth) ||
+ INVALID_MEMBER(sbq_wait_state_wait_cnt) ||
+ INVALID_MEMBER(sbq_wait_state_wait)) {
+ command_not_supported();
+ }
+
+ sb_flags |= SB_FLAG_INIT;
+}
+
+static char *__get_struct_name(const char *s)
+{
+ char *name, *p;
+
+ name = GETBUF(strlen(s) + 1);
+ strcpy(name, s);
+
+ p = strstr(name, ".");
+ *p = NULLCHAR;
+
+ return name;
+}
+
+void cmd_sbitmapq(void)
+{
+ struct sbitmapq_data sd = {0};
+ int c;
+
+ while ((c = getopt(argcnt, args, "s:p:xdv")) != EOF) {
+ switch (c) {
+ case 's':
+ if (sd.flags & SBITMAPQ_DATA_FLAG_STRUCT_NAME)
+ error(FATAL, "-s option (%s) already entered\n", sd.data_name);
+
+ sd.data_name = optarg;
+ sd.flags |= SBITMAPQ_DATA_FLAG_STRUCT_NAME;
+
+ break;
+
+ case 'p':
+ if (sd.flags & SBITMAPQ_DATA_FLAG_STRUCT_ADDR)
+ error(FATAL, "-p option (0x%lx) already entered\n", sd.data_addr);
+ else if (!IS_A_NUMBER(optarg))
+ error(FATAL, "invalid -p option: %s\n", optarg);
+
+ sd.data_addr = htol(optarg, FAULT_ON_ERROR, NULL);
+ if (!IS_KVADDR(sd.data_addr))
+ error(FATAL, "invalid kernel virtual address: %s\n", optarg);
+ sd.flags |= SBITMAPQ_DATA_FLAG_STRUCT_ADDR;
+
+ break;
+
+ case 'v':
+ sd.flags |= VERBOSE;
+ break;
+
+ case 'x':
+ if (sd.radix == 10)
+ error(FATAL, "-d and -x are mutually exclusive\n");
+ sd.radix = 16;
+ break;
+
+ case 'd':
+ if (sd.radix == 16)
+ error(FATAL, "-d and -x are mutually exclusive\n");
+ sd.radix = 10;
+ break;
+
+ default:
+ argerrs++;
+ break;
+ }
+ }
+
+ if (argerrs)
+ cmd_usage(pc->curcmd, SYNOPSIS);
+
+ if (!args[optind]) {
+ error(INFO, "command argument is required\n");
+ cmd_usage(pc->curcmd, SYNOPSIS);
+ } else if (args[optind] && args[optind + 1]) {
+ error(INFO, "too many arguments\n");
+ cmd_usage(pc->curcmd, SYNOPSIS);
+ } else if (!IS_A_NUMBER(args[optind])) {
+ error(FATAL, "invalid command argument: %s\n", args[optind]);
+ }
+
+ sd.addr = htol(args[optind], FAULT_ON_ERROR, NULL);
+ if (!IS_KVADDR(sd.addr))
+ error(FATAL, "invalid kernel virtual address: %s\n", args[optind]);
+
+ if ((sd.flags & SBITMAPQ_DATA_FLAG_STRUCT_NAME) &&
+ !(sd.flags & SBITMAPQ_DATA_FLAG_STRUCT_ADDR)) {
+ error(INFO, "-s option requires -p option");
+ cmd_usage(pc->curcmd, SYNOPSIS);
+ } else if ((sd.flags & SBITMAPQ_DATA_FLAG_STRUCT_ADDR) &&
+ !(sd.flags & SBITMAPQ_DATA_FLAG_STRUCT_NAME)) {
+ error(FATAL, "-p option is used with -s option only\n");
+ cmd_usage(pc->curcmd, SYNOPSIS);
+ }
+
+ if (sd.flags & SBITMAPQ_DATA_FLAG_STRUCT_NAME) {
+ bool error_flag = false;
+
+ if (count_chars(sd.data_name, '.') > 0)
+ sd.flags |= SBITMAPQ_DATA_FLAG_STRUCT_MEMBER;
+
+ if (sd.flags & SBITMAPQ_DATA_FLAG_STRUCT_MEMBER) {
+ char *data_name = __get_struct_name(sd.data_name);
+
+ sd.data_size = STRUCT_SIZE(data_name);
+ if (sd.data_size <= 0)
+ error_flag = true;
+
+ FREEBUF(data_name);
+ } else {
+ sd.data_size = STRUCT_SIZE(sd.data_name);
+ if (sd.data_size <= 0)
+ error_flag = true;
+ }
+ if (error_flag)
+ error(FATAL, "invalid data structure reference: %s\n", sd.data_name);
+ }
+
+ sbitmapq_init();
+
+ if (sd.flags & SBITMAPQ_DATA_FLAG_STRUCT_NAME)
+ sbitmap_queue_data_dump(&sd);
+ else
+ sbitmap_queue_dump(&sd);
+}
--
2.25.1
[PATCH 1/1] memory: Handle struct slab changes in linux-next
by Alexander Egorenkov
Since linux-next commit fe1e19081321 ("mm: Split slab into its own type"),
struct slab is used for both SLAB and SLUB. Therefore, don't depend on
the absence of struct slab to decide whether the SLAB implementation
should be chosen; use the member variable "cpu_slab" of struct
kmem_cache instead, since it should be present only in SLUB.
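For reference, the member the new check relies on comes from the kernel's
SLUB definition of kmem_cache (a trimmed sketch; the exact layout varies by
kernel version and configuration):

/* include/linux/slub_def.h (trimmed): only the SLUB allocator's kmem_cache
 * carries a per-CPU cpu_slab pointer, so its presence in the debuginfo
 * identifies SLUB even now that "struct slab" exists for both allocators. */
struct kmem_cache {
	struct kmem_cache_cpu __percpu *cpu_slab;
	/* ... */
};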
Signed-off-by: Alexander Egorenkov <egorenar(a)linux.ibm.com>
---
memory.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/memory.c b/memory.c
index 86c02c132890..5af45fd7d834 100644
--- a/memory.c
+++ b/memory.c
@@ -576,7 +576,8 @@ vm_init(void)
STRUCT_SIZE_INIT(cpucache_s, "cpucache_s");
} else if (!VALID_STRUCT(kmem_slab_s) &&
- !VALID_STRUCT(slab_s) &&
+ !VALID_STRUCT(slab_s) &&
+ !MEMBER_EXISTS("kmem_cache", "cpu_slab") &&
(VALID_STRUCT(slab) || (vt->flags & SLAB_OVERLOAD_PAGE))) {
vt->flags |= PERCPU_KMALLOC_V2;
--
2.31.1