[PATCH] Fix for compiler warnings on 32-bit architectures
by HAGIO KAZUHITO(萩尾 一仁)
Suppress compiler warnings "warning: format '%ld' expects argument
of type 'long int', but argument 4 has type 'uint64_t' [-Wformat=]"
and similar ones generated on 32-bit architectures as a result of
commit 3fedbee9bfbb ("vmware_guestdump: new input format").
Signed-off-by: Kazuhito Hagio <k-hagio-ab(a)nec.com>
---
I missed these; I think we should fix them before the release.
vmware_guestdump.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/vmware_guestdump.c b/vmware_guestdump.c
index d3fac5944026..62da0a77a227 100644
--- a/vmware_guestdump.c
+++ b/vmware_guestdump.c
@@ -292,9 +292,9 @@ int
vmware_guestdump_memory_dump(FILE *ofp)
{
fprintf(ofp, "vmware_guestdump:\n");
- fprintf(ofp, " Header: version=%d num_vcpus=%ld\n",
- GUESTDUMP_VERSION, vmss.num_vcpus);
- fprintf(ofp, "Total memory: %ld\n", vmss.memsize);
+ fprintf(ofp, " Header: version=%d num_vcpus=%llu\n",
+ GUESTDUMP_VERSION, (ulonglong)vmss.num_vcpus);
+ fprintf(ofp, "Total memory: %llu\n", (ulonglong)vmss.memsize);
if (vmss.regionscount > 1) {
uint64_t holes_sum = 0;
@@ -303,11 +303,11 @@ vmware_guestdump_memory_dump(FILE *ofp)
fprintf(ofp, "Memory regions[%d]:\n", vmss.regionscount);
fprintf(ofp, " [0x%016x-", 0);
for (i = 0; i < vmss.regionscount - 1; i++) {
- fprintf(ofp, "0x%016lx]\n", (uint64_t)vmss.regions[i].startpagenum << VMW_PAGE_SHIFT);
- fprintf(ofp, " [0x%016lx-", (uint64_t)vmss.regions[i].startppn << VMW_PAGE_SHIFT);
+ fprintf(ofp, "0x%016llx]\n", (ulonglong)vmss.regions[i].startpagenum << VMW_PAGE_SHIFT);
+ fprintf(ofp, " [0x%016llx-", (ulonglong)vmss.regions[i].startppn << VMW_PAGE_SHIFT);
holes_sum += vmss.regions[i].startppn - vmss.regions[i].startpagenum;
}
- fprintf(ofp, "0x%016lx]\n", vmss.memsize + (holes_sum << VMW_PAGE_SHIFT));
+ fprintf(ofp, "0x%016llx]\n", (ulonglong)vmss.memsize + (holes_sum << VMW_PAGE_SHIFT));
}
return TRUE;
[PATCH] task.c: avoid unnecessary cpu cycles during init
by Hari Bathini
While stkptr_to_task() does the job of matching a stack pointer
to a task, it runs through each task's stack to find whether the given
SP falls into its range. This can be a very expensive operation if
the vmcore is from a system running a large number of tasks, and it
gets even worse when the total number of CPUs on the system is on the
order of thousands. Given the expensive nature of the operation, it
should be optimized as much as possible. Possible optimizations:
1) Get the min & max of the stack range in a first pass and check the
   given SP against these values to decide whether or not to proceed
   with the stack lookup.
2) Use multithreading to update irq_tasks in parallel.
3) Skip stkptr_to_task() when SP is 0.
Though option 3 is low-hanging fruit, it significantly improved the
time taken between starting the crash utility and reaching the crash
prompt. Implement option 3 to optimize while listing the other two
options as TODO items for follow-up.
Signed-off-by: Hari Bathini <hbathini(a)linux.ibm.com>
---
On a system with about 1500 CPUs and 165K running tasks, it was taking
about a day to get to the crash prompt without this patch, while it
takes only about 5-10 minutes with this change.
task.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/task.c b/task.c
index 8dd2b96..423cd45 100644
--- a/task.c
+++ b/task.c
@@ -713,6 +713,7 @@ irqstacks_init(void)
} else
error(WARNING, "cannot determine hardirq_ctx addresses\n");
+ /* TODO: Use multithreading to parallely update irq_tasks. */
for (i = 0; i < NR_CPUS; i++) {
if (!(tt->hardirq_ctx[i]))
continue;
@@ -5005,6 +5006,10 @@ pid_exists(ulong pid)
/*
* Translate a stack pointer to a task, dealing with possible split.
* If that doesn't work, check the hardirq_stack and softirq_stack.
+ *
+ * TODO: This function can be optimized by getting min & max of the
+ * stack range in first pass and use these values against the
+ * given SP to decide whether or not to proceed with stack lookup.
*/
ulong
stkptr_to_task(ulong sp)
@@ -5013,6 +5018,9 @@ stkptr_to_task(ulong sp)
struct task_context *tc;
struct bt_info bt_info, *bt;
+ if (!sp)
+ return NO_TASK;
+
bt = &bt_info;
tc = FIRST_CONTEXT();
for (i = 0; i < RUNNING_TASKS(); i++, tc++) {
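The three options above can be sketched in isolation. The struct and names below are hypothetical stand-ins for crash's task bookkeeping, just to show the SP == 0 early-out and the min/max pre-filter:

```c
#include <stddef.h>

#define NO_TASK 0UL

/* Hypothetical stand-in for a task's stack range. */
struct task_stack {
	unsigned long base;	/* stack start */
	unsigned long top;	/* stack end (exclusive) */
};

/* Return a fake 1-based task id whose stack contains sp, or NO_TASK. */
static unsigned long
stkptr_to_task_sketch(unsigned long sp, const struct task_stack *stacks,
		      size_t n, unsigned long min_base, unsigned long max_top)
{
	size_t i;

	if (!sp)				/* option 3: SP == 0 never matches */
		return NO_TASK;
	if (sp < min_base || sp >= max_top)	/* option 1: range pre-filter */
		return NO_TASK;

	for (i = 0; i < n; i++)			/* full scan only when plausible */
		if (sp >= stacks[i].base && sp < stacks[i].top)
			return i + 1;
	return NO_TASK;
}
```

With 165K tasks, the two guards turn the common miss case from a full O(tasks) scan into two comparisons.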
Re: [Crash-utility] [PATCH] x86_64: Add support for new divide_error name
by lijiang
On 2020/10/31 00:00, crash-utility-request(a)redhat.com wrote:
> Message: 1
> Date: Fri, 30 Oct 2020 12:44:57 +0200
> From: Nikolay Borisov <nborisov(a)suse.com>
> To: crash-utility(a)redhat.com
> Cc: Nikolay Borisov <nborisov(a)suse.com>
> Subject: [Crash-utility] [PATCH] x86_64: Add support for new
> divide_error name
> Message-ID: <20201030104457.3472472-1-nborisov(a)suse.com>
> Content-Type: text/plain; charset="US-ASCII"
>
> Upstream kernel commit 9d06c4027f21 ("x86/entry: Convert Divide Error to IDTENTRY")
> renamed the divide_error handler to asm_exc_divide_error. This breaks kaslr
> offset derivation when crash tries to open a qemu image dump. Fix it
> by also checking the symbols for the presence of the new name.
>
> Signed-off-by: Nikolay Borisov <nborisov(a)suse.com>
> ---
> symbols.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/symbols.c b/symbols.c
> index 70b1455750ee..e3594ce0ed48 100644
> --- a/symbols.c
> +++ b/symbols.c
> @@ -12711,9 +12711,11 @@ numeric_forward(const void *P_x, const void *P_y)
>
> if (SADUMP_DUMPFILE() || QEMU_MEM_DUMP_NO_VMCOREINFO() || VMSS_DUMPFILE()) {
> /* Need for kaslr_offset and phys_base */
> - if (STREQ(x->name, "divide_error"))
> + if (STREQ(x->name, "divide_error") ||
> + STREQ(x->name, "asm_exc_divide_error"))
> st->divide_error_vmlinux = valueof(x);
> - else if (STREQ(y->name, "divide_error"))
> + else if (STREQ(y->name, "divide_error") ||
> + STREQ(y->name, "asm_exc_divide_error"))
> st->divide_error_vmlinux = valueof(y);
>
> if (STREQ(x->name, "idt_table"))
> -- 2.25.1
This looks good to me. Acked-by: Lianbo Jiang <lijiang(a)redhat.com>
Thanks.
Re: [Crash-utility] [PATCH 2/2] kaslr: get offset by walking page tree
by lijiang
Hi, Alexey
Thanks for the patch.
On 2020/10/17 00:00, crash-utility-request(a)redhat.com wrote:
> Date: Thu, 15 Oct 2020 13:44:32 -0700
> From: Alexey Makhalov <amakhalov(a)vmware.com>
> To: <crash-utility(a)redhat.com>, <amakhalov(a)vmware.com>
> Subject: [Crash-utility] [PATCH 2/2] kaslr: get offset by walking page
> tree
> Message-ID: <20201015204432.4695-3-amakhalov(a)vmware.com>
> Content-Type: text/plain
>
> This method requires only valid CR3. It walks through
> page tree starting from __START_KERNEL_map to get real
> _stext and its physical address.
> It is used as backup method to get kaslr offset, if
> IDTR is not valid (zeroed). It might happen when kernel
> invalidates IDT, for example triggering triple fault on
> reboot (reboot=t cmdline).
>
> This method does not support PTI (Page Table Isolation)
> case where CR3 points to the isolated page tree. So, use
> it only when CR3 points to "full" kernel.
>
> Signed-off-by: Alexey Makhalov <amakhalov(a)vmware.com>
> ---
> kaslr_helper.c | 115 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 115 insertions(+)
>
> diff --git a/kaslr_helper.c b/kaslr_helper.c
> index bb19e54..f11cb55 100644
> --- a/kaslr_helper.c
> +++ b/kaslr_helper.c
> @@ -322,6 +322,104 @@ quit:
> }
>
> /*
> + * Find virtual (VA) and physical (PA) addresses of kernel start
> + *
> + * va:
> + * Actual address of the kernel start (_stext) placed
> + * randomly by kaslr feature. To be more accurate,
> + * VA = _stext(from vmlinux) + kaslr_offset
> + *
> + * pa:
> + * Physical address where the kerenel is placed.
> + *
> + * In nokaslr case, VA = _stext (from vmlinux)
> + * In kaslr case, virtual address of the kernel placement goes
> + * in this range: ffffffff80000000..ffffffff9fffffff, or
> + * __START_KERNEL_map..+512MB
> + *
> + * https://www.kernel.org/doc/Documentation/x86/x86_64/mm.txt
> + *
> + * Randomized VA will be the first valid page starting from
> + * ffffffff80000000 (__START_KERNEL_map). Page tree entry of
> + * this page will contain the PA of the kernel start.
> + *
> + * NOTES:
> + * 1. This method does not support PTI (Page Table Isolation)
> + * case where CR3 points to the isolated page tree.
> + * 2. 4-level paging support only, as caller (calc_kaslr_offset)
> + * does not support 5-level paging.
Would it be good to add a check for 5-level paging before calling
find_kernel_start()? It seems we cannot test 'machdep->flags & VM_5LEVEL',
because machdep->flags is not initialized at that point.
But would it be possible to check CR4 (vmss.regs64[0]->cr[4]) on x86, or
the symbol '__pgtable_l5_enabled'?
Thanks.
Lianbo
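For reference, the CR4 check suggested above could look like the sketch below: on x86_64, CR4 bit 12 (LA57) is set when 5-level paging is active. How the saved CR4 value is reached depends on the dump format, so the function argument here is an assumption:

```c
#include <stdint.h>

#define X86_CR4_LA57	(1ULL << 12)	/* CR4.LA57: 5-level paging enabled */

/* cr4 would come from the saved register state, e.g. the VMSS
 * register block in the case discussed above. */
static int five_level_paging_enabled(uint64_t cr4)
{
	return (cr4 & X86_CR4_LA57) != 0;
}
```

If this returns true, the 4-level-only walk in find_kernel_start() should be skipped.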
> + */
> +static int
> +find_kernel_start(ulong *va, ulong *pa)
> +{
> + int i, pgd_idx, pud_idx, pmd_idx, pte_idx;
> + uint64_t pgd_pte, pud_pte, pmd_pte, pte;
> +
> + pgd_idx = pgd_index(__START_KERNEL_map);
> + pud_idx = pud_index(__START_KERNEL_map);
> + pmd_idx = pmd_index(__START_KERNEL_map);
> + pte_idx = pte_index(__START_KERNEL_map);
> +
> + for (; pgd_idx < PTRS_PER_PGD; pgd_idx++) {
> + pgd_pte = ULONG(machdep->pgd + pgd_idx * sizeof(uint64_t));
> + if (pgd_pte & _PAGE_PRESENT)
> + break;
> + pud_idx = pmd_idx = pte_idx = 0;
> + }
> + if (pgd_idx == PTRS_PER_PGD)
> + return FALSE;
> +
> + FILL_PUD(pgd_pte & PHYSICAL_PAGE_MASK, PHYSADDR, PAGESIZE());
> + for (; pud_idx < PTRS_PER_PUD; pud_idx++) {
> + pud_pte = ULONG(machdep->pud + pud_idx * sizeof(uint64_t));
> + if (pud_pte & _PAGE_PRESENT)
> + break;
> + pmd_idx = pte_idx = 0;
> + }
> + if (pud_idx == PTRS_PER_PUD)
> + return FALSE;
> + if (pud_pte & _PAGE_PSE) {
> + /* 1GB page */
> + *va = (~__VIRTUAL_MASK) | ((ulong)pgd_idx << __PGDIR_SHIFT) |
> + ((ulong)pud_idx << PUD_SHIFT);
> + *pa = pud_pte & PHYSICAL_PAGE_MASK;
> + return TRUE;
> + }
> +
> + FILL_PMD(pud_pte & PHYSICAL_PAGE_MASK, PHYSADDR, PAGESIZE());
> + for (; pmd_idx < PTRS_PER_PMD; pmd_idx++) {
> + pmd_pte = ULONG(machdep->pmd + pmd_idx * sizeof(uint64_t));
> + if (pmd_pte & _PAGE_PRESENT)
> + break;
> + pte_idx = 0;
> + }
> + if (pmd_idx == PTRS_PER_PMD)
> + return FALSE;
> + if (pmd_pte & _PAGE_PSE) {
> + /* 2MB page */
> + *va = (~__VIRTUAL_MASK) | ((ulong)pgd_idx << __PGDIR_SHIFT) |
> + ((ulong)pud_idx << PUD_SHIFT) | (pmd_idx << PMD_SHIFT);
> + *pa = pmd_pte & PHYSICAL_PAGE_MASK;
> + return TRUE;
> + }
> +
> + FILL_PTBL(pmd_pte & PHYSICAL_PAGE_MASK, PHYSADDR, PAGESIZE());
> + for (; pte_idx < PTRS_PER_PTE; pte_idx++) {
> + pte = ULONG(machdep->ptbl + pte_idx * sizeof(uint64_t));
> + if (pte & _PAGE_PRESENT)
> + break;
> + }
> + if (pte_idx == PTRS_PER_PTE)
> + return FALSE;
> +
> + *va = (~__VIRTUAL_MASK) | ((ulong)pgd_idx << __PGDIR_SHIFT) |
> + ((ulong)pud_idx << PUD_SHIFT) | (pmd_idx << PMD_SHIFT) |
> + (pte_idx << PAGE_SHIFT);
> + *pa = pmd_pte & PHYSICAL_PAGE_MASK;
> + return TRUE;
> +}
> +
> +/*
> * Calculate kaslr_offset and phys_base
> *
> * kaslr_offset:
> @@ -445,6 +543,22 @@ retry:
> goto quit;
> }
>
> + if (idtr == 0 && st->_stext_vmlinux && (st->_stext_vmlinux != UNINITIALIZED)) {
> + ulong va, pa;
> + ret = find_kernel_start(&va, &pa);
> + if (ret == FALSE)
> + goto quit;
> + if (CRASHDEBUG(1)) {
> + fprintf(fp, "calc_kaslr_offset: _stext(vmlinux): %lx\n", st->_stext_vmlinux);
> + fprintf(fp, "calc_kaslr_offset: kernel start VA: %lx\n", va);
> + fprintf(fp, "calc_kaslr_offset: kernel start PA: %lx\n", pa);
> + }
> + kaslr_offset = va - st->_stext_vmlinux;
> + phys_base = pa - (va - __START_KERNEL_map);
> +
> + goto found;
> + }
> +
> /* Convert virtual address of IDT table to physical address */
> if (!kvtop(NULL, idtr, &idtr_paddr, verbose)) {
> if (SADUMP_DUMPFILE())
> @@ -505,6 +619,7 @@ retry:
> fprintf(fp, "kaslr_helper: asssuming the kdump 1st kernel.\n");
> }
>
> +found:
> if (CRASHDEBUG(1)) {
> fprintf(fp, "calc_kaslr_offset: kaslr_offset=%lx\n",
> kaslr_offset);
> -- 2.11.0
[PATCH] memory_driver: Fix memory driver module build with Linux 5.4 and later
by HAGIO KAZUHITO(萩尾 一仁)
With Linux 5.4 and later kernels that contain commit
7e35b42591c058b91282f95ce3b2cf0c05ffe93d ("kbuild: remove SUBDIRS
support"), the "make" command in the memory_driver directory doesn't
build the crash memory driver module as expected. Add "M=" to fix this.
Signed-off-by: Kazuhito Hagio <k-hagio-ab(a)nec.com>
---
memory_driver/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/memory_driver/Makefile b/memory_driver/Makefile
index e84afe98ff24..b494aa3cd184 100644
--- a/memory_driver/Makefile
+++ b/memory_driver/Makefile
@@ -10,6 +10,6 @@
#
obj-m := crash.o
all:
- make -C /lib/modules/`uname -r`/build SUBDIRS=${PWD} modules
+ make -C /lib/modules/`uname -r`/build M=${PWD} SUBDIRS=${PWD} modules
clean:
rm -f *.mod.c *.ko *.o Module.*
[PATCH] Fix memory driver module build with kernel 5.8+
by Petr Tesarik
Kernel commit fe557319aa06c23cffc9346000f119547e0f289a renamed
probe_kernel_{read,write} to copy_{from,to}_kernel_nofault.
Additionally, commit 0493cb086353e786be56010780a0b7025b5db34c
unexported probe_kernel_write(), so writing kernel memory is
no longer possible from a module.
I have renamed the functions in the source, but I'm also adding wrappers
to allow building the module with older kernel versions.
Without this patch, the build fails with kernel 5.8 and later:
kbuild/default/crash.c: In function 'crash_write':
kbuild/default/crash.c:189:12: error: implicit declaration of function 'probe_kernel_write'; did you mean 'kernel_write'? [-Werror=implicit-function-declaration]
189 | if (probe_kernel_write(vaddr, buffer, count)) {
| ^~~~~~~~~~~~~~~~~~
| kernel_write
kbuild/default/crash.c: In function 'crash_read':
kbuild/default/crash.c:225:13: error: implicit declaration of function 'probe_kernel_read'; did you mean 'kernel_read'? [-Werror=implicit-function-declaration]
225 | if (probe_kernel_read(buffer, vaddr, count)) {
| ^~~~~~~~~~~~~~~~~
| kernel_read
Signed-off-by: Petr Tesarik <ptesarik(a)suse.com>
---
memory_driver/crash.c | 27 +++++++++++++++++++++++++--
1 file changed, 25 insertions(+), 2 deletions(-)
--- a/memory_driver/crash.c
+++ b/memory_driver/crash.c
@@ -25,6 +25,7 @@
*****************************************************************************/
#include <linux/module.h>
+#include <linux/version.h>
#include <linux/types.h>
#include <linux/miscdevice.h>
#include <linux/init.h>
@@ -37,6 +38,22 @@
extern int page_is_ram(unsigned long);
+#if LINUX_VERSION_CODE < KERNEL_VERSION(5, 8, 0)
+
+#define CAN_WRITE_KERNEL 1
+
+static inline long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
+{
+ return probe_kernel_read(dst, src, size);
+}
+
+static inline long copy_to_kernel_nofault(void *dst, const void *src, size_t size)
+{
+ return probe_kernel_write(dst, src, size);
+}
+
+#endif
+
#ifdef CONFIG_S390
/*
* For swapped prefix pages get bounce buffer using xlate_dev_mem_ptr()
@@ -160,6 +177,8 @@ crash_llseek(struct file * file, loff_t
}
}
+#ifdef CAN_WRITE_KERNEL
+
static ssize_t
crash_write(struct file *file, const char *buf, size_t count, loff_t *poff)
{
@@ -186,7 +205,7 @@ crash_write(struct file *file, const cha
return -EFAULT;
}
- if (probe_kernel_write(vaddr, buffer, count)) {
+ if (copy_to_kernel_nofault(vaddr, buffer, count)) {
unmap_virtual(page);
return -EFAULT;
}
@@ -197,6 +216,8 @@ crash_write(struct file *file, const cha
return written;
}
+#endif
+
/*
* Determine the page address for an address offset value,
* get a virtual address for it, and copy it out.
@@ -222,7 +243,7 @@ crash_read(struct file *file, char *buf,
* Use bounce buffer to bypass the CONFIG_HARDENED_USERCOPY
* kernel text restriction.
*/
- if (probe_kernel_read(buffer, vaddr, count)) {
+ if (copy_from_kernel_nofault(buffer, vaddr, count)) {
unmap_virtual(page);
return -EFAULT;
}
@@ -294,7 +315,9 @@ static struct file_operations crash_fops
.owner = THIS_MODULE,
.llseek = crash_llseek,
.read = crash_read,
+#ifdef CAN_WRITE_KERNEL
.write = crash_write,
+#endif
.unlocked_ioctl = crash_ioctl,
.open = crash_open,
.release = crash_release,
Re: [Crash-utility] [PATCH v2 1/1] Support cross-compilation
by Bhupesh Sharma
On Wed, Oct 28, 2020 at 11:56 PM Alexander Egorenkov
<egorenar(a)posteo.net> wrote:
>
> Bhupesh Sharma <bhsharma(a)redhat.com> writes:
>
> >
> > ifneq ($(CROSS_COMPILE),)
> > SUBARCH := $(shell echo $(CROSS_COMPILE) | cut -d- -f1 | sed 's:^.*/::g')
> > else
> > SUBARCH := $(shell uname -m)
> > endif
> > SUBARCH := $(shell echo $(SUBARCH) | sed -e s/i.86/i386/ -e s/sun4u/sparc64/ \
> > -e s/arm.*/arm/ -e s/sa110/arm/ \
> > -e s/s390x/s390/ -e s/parisc64/parisc/ \
> > -e s/ppc.*/powerpc/ -e s/mips.*/mips/ )
> >
> > ARCH ?= $(SUBARCH)
>
> Hmm,
>
> but the Makefile needs to differentiate between 64 and 32bit archs, e.g. in
>
> if [ "${ARCH}" = "ppc64le" ]
>
> So "-e s/ppc.*/powerpc/" won't cut it, i think.
Sure, that was just an example, not a complete code blob :)
> Furthermore, we have to convert ARCH to TARGET to pass it to configure.c which also
> differentiates between 32 and 64 bit archs.
>
> Need to think about it more.
Sure, but I think we don't need to reinvent the wheel there.
Cross-compiling user-space tools and kernel bits has been standardized
for some time, and you can find helpful logic for this in the
respective Makefiles (you can refer to the Busybox and Linux kernel
Makefiles).
Thanks,
Bhupesh
[PATCH] x86_64: Add support for new divide_error name
by Nikolay Borisov
Upstream kernel commit 9d06c4027f21 ("x86/entry: Convert Divide Error to IDTENTRY")
renamed the divide_error handler to asm_exc_divide_error. This breaks kaslr
offset derivation when crash tries to open a qemu image dump. Fix it
by also checking the symbols for the presence of the new name.
Signed-off-by: Nikolay Borisov <nborisov(a)suse.com>
---
symbols.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/symbols.c b/symbols.c
index 70b1455750ee..e3594ce0ed48 100644
--- a/symbols.c
+++ b/symbols.c
@@ -12711,9 +12711,11 @@ numeric_forward(const void *P_x, const void *P_y)
if (SADUMP_DUMPFILE() || QEMU_MEM_DUMP_NO_VMCOREINFO() || VMSS_DUMPFILE()) {
/* Need for kaslr_offset and phys_base */
- if (STREQ(x->name, "divide_error"))
+ if (STREQ(x->name, "divide_error") ||
+ STREQ(x->name, "asm_exc_divide_error"))
st->divide_error_vmlinux = valueof(x);
- else if (STREQ(y->name, "divide_error"))
+ else if (STREQ(y->name, "divide_error") ||
+ STREQ(y->name, "asm_exc_divide_error"))
st->divide_error_vmlinux = valueof(y);
if (STREQ(x->name, "idt_table"))
--
2.25.1
[PATCH 0/5] zram related changes for zram support of crash gcore command
by HATAYAMA Daisuke
This patch set is to make changes I found necessary during development
of zram support for crash gcore command.
HATAYAMA Daisuke (5):
diskdump, zram: cleanup try_zram_decompress()
diskdump, zram: initialize zram symbol information when needed
diskname, zram: fix fault error when reading zram disk with no symbol
information
diskname, zram: Notify necessity of loading zram module
memory, zram: introduce and export readswap()
defs.h | 1 +
diskdump.c | 220 ++++++++++++++++++++++++++++++++++++-------------------------
memory.c | 5 +-
3 files changed, 136 insertions(+), 90 deletions(-)
--
1.8.3.1