Hi all,
This is actually a known issue on ARM (I just remembered). When the crash
happens, the kernel identity-maps the whole address space of the running
process. This has been fixed by upstream commit:
commit 2c8951ab0c337cb198236df07ad55f9dd4892c26
Author: Will Deacon <will.deacon@arm.com>
Date: Wed Jun 8 15:53:34 2011 +0100
ARM: idmap: use idmap_pgd when setting up mm for reboot
For soft-rebooting a system, it is necessary to map the MMU-off code
with an identity mapping so that execution can continue safely once the
MMU has been switched off.
Currently, switch_mm_for_reboot takes out a 1:1 mapping from 0x0 to
TASK_SIZE during reboot in the hope that the reset code lives at a
physical address corresponding to a userspace virtual address.
This patch modifies the code so that we switch to the idmap_pgd tables,
which contain a 1:1 mapping of the cpu_reset code. This has the
advantage of only remapping the code that we need and also means we
don't need to worry about allocating a pgd from an atomic context in the
case that the physical address of the cpu_reset code aliases with the
virtual space used by the kernel.
So, it actually allocates a 16k L1 page table just for this? But why
be so picky about which code is identity mapped by using the
.idmap.text section? Couldn't we just identity map all the kernel code
in that case?
I suggested a more selective and temporary modification of the current
mapping at one point:
http://thread.gmane.org/gmane.linux.kernel.kexec/4612
I guess 16k isn't worth making a fuss about, but it just seems a
little bit wasteful...
By the way, as far as I can tell, there's no identity mapping of
'relocate_new_kernel'. Does the 'isb' instruction in cpu_v7_reset
guarantee that we're in the realm of physical addresses by then?
Regards,
Per