Hi, Hatayama,
Since a zram page is not an existing page, we can't use vtop to find the exact physical
address, so gcore has to make a small change for this. I already sent the gcore patch
in a previous mail.
I answered your question about the exact kernel commit for the aarch64 stack in the
previous email; please refer to the change below.
The latest two changes are attached; please review.
commit 34be98f4944f99076f049a6806fc5f5207a755d3
Author: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
Date: Thu Jul 20 17:15:45 2017 +0100
arm64: kernel: remove {THREAD,IRQ_STACK}_START_SP
For historical reasons, we leave the top 16 bytes of our task and IRQ
stacks unused, a practice used to ensure that the SP can always be
masked to find the base of the current stack (historically, where
thread_info could be found).
However, this is not necessary, as:
* When an exception is taken from a task stack, we decrement the SP by
S_FRAME_SIZE and stash the exception registers before we compare the
SP against the task stack. In such cases, the SP must be at least
S_FRAME_SIZE below the limit, and can be safely masked to determine
whether the task stack is in use.
* When transitioning to an IRQ stack, we'll place a dummy frame onto the
IRQ stack before enabling asynchronous exceptions, or executing code
we expect to trigger faults. Thus, if an exception is taken from the
IRQ stack, the SP must be at least 16 bytes below the limit.
* We no longer mask the SP to find the thread_info, which is now found
via sp_el0. Note that historically, the offset was critical to ensure
that cpu_switch_to() found the correct stack for new threads that
hadn't yet executed ret_from_fork().
Given that, this initial offset serves no purpose, and can be removed.
This brings us in-line with other architectures (e.g. x86) which do not
rely on this masking.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
[Mark: rebase, kill THREAD_START_SP, commit msg additions]
Signed-off-by: Mark Rutland <mark.rutland(a)arm.com>
Reviewed-by: Will Deacon <will.deacon(a)arm.com>
Tested-by: Laura Abbott <labbott(a)redhat.com>
Cc: Catalin Marinas <catalin.marinas(a)arm.com>
Cc: James Morse <james.morse(a)arm.com>
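For reference, the practical effect of this commit on stack-bound calculations (which is
what the gcore aarch64 change has to track) can be sketched as follows; THREAD_SIZE here
is a common arm64 value and the helper is an illustration, not the actual gcore code:

    /*
     * Illustrative sketch only, not gcore/crash source.  Before this commit the
     * usable top of an arm64 task/IRQ stack sat 16 bytes below the end of the
     * allocation (THREAD_START_SP); afterwards the SP may start at the very end
     * of the allocation.
     */
    #define THREAD_SIZE     (16 * 1024)          /* common arm64 value; assumption */
    #define THREAD_START_SP (THREAD_SIZE - 16)   /* pre-commit layout only */

    static unsigned long stack_top(unsigned long stack_base, int has_commit_34be98f)
    {
            if (has_commit_34be98f)
                    return stack_base + THREAD_SIZE;     /* new layout */
            return stack_base + THREAD_START_SP;         /* old layout */
    }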
________________________________
From: d.hatayama(a)fujitsu.com <d.hatayama(a)fujitsu.com>
Sent: Tuesday, April 14, 2020 20:16
To: 赵乾利; Dave Anderson
Cc: Discussion list for crash utility usage, maintenance and development
Subject: Re: Reply: [External Mail] Re: [Crash-utility] zram decompress support for
gcore/crash-utility
Zhao,
I couldn't find the latest patch for the gcore command, but looking at commit
b12bdd36cf7caad24957c0b8c030001321ab2df4 in the crash utility, in the final design
try_zram_decompress() is called after uvtop(), rather than uvtop() being replaced by
readmem(UVADDR), and that design can be integrated into the gcore source code more
easily. Please send a new patch based on the latest design. It would also be very
helpful if you shared your test set for this zram support work.
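For clarity, that control flow can be sketched roughly as below; uvtop(), readmem() and
try_zram_decompress() exist in crash, but the two page_* helpers and the exact argument
types are illustrative assumptions, not the actual code added by commit b12bdd36cf7c:

    /*
     * Rough sketch of the design described above, not the real crash code.
     * Assumes crash's defs.h context, and that when uvtop() returns FALSE for
     * a swapped-out page, *paddr holds the pte/swap entry.
     */
    static int read_user_page(struct task_context *tc, ulong vaddr, char *buf, long size)
    {
            physaddr_t paddr;

            if (uvtop(tc, vaddr, &paddr, 0))
                    /* Present page: read it through the normal physical path. */
                    return readmem(paddr, PHYSADDR, buf, size,
                                   "user page", RETURN_ON_ERROR);

            /*
             * Not present: if the pte is a swap entry backed by a zram device,
             * decompress the stored object instead of failing outright.
             */
            if (page_is_swapped_out(paddr) && zram_backed(paddr))
                    return try_zram_decompress(paddr, (unsigned char *)buf,
                                               size, vaddr) == size;

            return FALSE;
    }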
Also, for the other, independent aarch64 patch, as I already requested, please tell me
the kernel commit that made the corresponding change to the aarch64 kernel stack data
structure. I need it to check whether your patch is correct and to keep it on file for
maintenance purposes.
________________________________
From: 赵乾利 <zhaoqianli(a)xiaomi.com>
Sent: Monday, April 13, 2020 22:41
To: Dave Anderson <anderson(a)redhat.com>
CC: Hatayama, Daisuke/畑山 大輔 <d.hatayama(a)fujitsu.com>; Discussion list for crash
utility usage, maintenance and development <crash-utility(a)redhat.com>
Subject: Re: Reply: [External Mail] Re: [Crash-utility] zram decompress support for
gcore/crash-utility
Hi, Hatayama,
May I ask how the patch to support zram in gcore is going? The support in crash is ready.
Is there anything else I can do for you?
________________________________________
From: Dave Anderson <anderson(a)redhat.com>
Sent: Monday, April 13, 2020 20:17
To: 赵乾利
Cc: d hatayama; Discussion list for crash utility usage, maintenance and development
Subject: Re: Reply: [External Mail] Re: [Crash-utility] zram decompress support for
gcore/crash-utility
----- Original Message -----
The mistake was caused by the patch update....
Please re-check the new patch.
In the interest of expediency, I went ahead and made a few cosmetic changes to
the comments and error message strings, and queued the patch for crash-7.2.9:
https://github.com/crash-utility/crash/commit/b12bdd36cf7caad24957c0b8c030001321ab2df4
Thanks,
Dave
________________________________________
From: Dave Anderson <anderson(a)redhat.com>
Sent: Friday, April 10, 2020 22:57
To: 赵乾利
Cc: d hatayama; Discussion list for crash utility usage, maintenance and
development
Subject: Re: Reply: [External Mail] Re: [Crash-utility] zram decompress support
for gcore/crash-utility
----- Original Message -----
> I had a small problem compiling 32-bit on my x86-64 host:
> /usr/bin/ld: skipping incompatible
> /usr/lib/gcc/x86_64-linux-gnu/4.8/libgcc.a when searching for -lgcc
> /usr/bin/ld: cannot find -lgcc
> /usr/bin/ld: skipping incompatible
> /usr/lib/gcc/x86_64-linux-gnu/4.8/libgcc_s.so when searching for -lgcc_s
> /usr/bin/ld: cannot find -lgcc_s
>
> I think I have fixed the build warning, but the 32-bit rebuild failed because of the
> error above; please help confirm. I also moved the log message into
> try_zram_decompress(); please check the attachment.
The patch now compiles OK, but my first simple test shows that something
is obviously wrong with it.
Here is a set of user-space addresses that have been swapped out to disk:
crash> set 1
PID: 1
COMMAND: "systemd"
TASK: ffff92a13a1e8000 [THREAD_INFO: ffff92a13a260000]
CPU: 2
STATE: TASK_INTERRUPTIBLE
crash> vm -p | grep SWAP
55d917fb5000 SWAP: /dev/dm-2 OFFSET: 55827
55d917fb7000 SWAP: /dev/dm-2 OFFSET: 55828
55d917fc2000 SWAP: /dev/dm-2 OFFSET: 121359
55d917fc6000 SWAP: /dev/dm-2 OFFSET: 88579
55d917fcb000 SWAP: /dev/dm-2 OFFSET: 88581
55d917fcc000 SWAP: /dev/dm-2 OFFSET: 88582
55d917fcd000 SWAP: /dev/dm-2 OFFSET: 88583
55d917fce000 SWAP: /dev/dm-2 OFFSET: 104963
55d917fcf000 SWAP: /dev/dm-2 OFFSET: 104964
...
Obviously any read of the addresses above should fail, but each
read returns successfully, and each read is screwing up the internal
buffering scheme:
crash> rd -u 55d917fb5000
55d917fb5000: 0000000000000000 ........
WARNING: malloc/free mismatch (53/54)
crash> rd -u 55d917fb7000
55d917fb7000: 0000000000000000 ........
WARNING: malloc/free mismatch (53/55)
crash> rd -u 55d917fc2000
55d917fc2000: 0000000000000000 ........
WARNING: malloc/free mismatch (53/56)
crash> rd -u 55d917fc6000
55d917fc6000: 0000000000000000 ........
WARNING: malloc/free mismatch (53/57)
crash>
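A mismatch like the one above typically means a buffer obtained with GETBUF() in the new
code path is not released on one of its return paths. A minimal sketch of the required
discipline, using a hypothetical helper rather than the patch's actual function:

    /*
     * Hypothetical example, not the code from the patch.  Every GETBUF() must be
     * balanced by exactly one FREEBUF() on every return path, including error
     * paths, or the per-command malloc/free counts diverge as shown above.
     */
    static int decompress_one_object(ulonglong obj, char *out, long len)
    {
            char *workbuf;
            int ret = FALSE;

            workbuf = GETBUF(len);                /* counts as one "malloc" */

            if (!readmem(obj, PHYSADDR, workbuf, len,
                "zram object", RETURN_ON_ERROR))
                    goto out;                     /* never bail out before FREEBUF() */

            /* ... decompress workbuf into out here ... */
            ret = TRUE;
    out:
            FREEBUF(workbuf);                     /* counts as one "free", always runs */
            return ret;
    }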
Dave