(pvops 2.6.32.21) crash: cannot read/find cr3 page
by Tom Anderson
If I use a pvops domU kernel version 2.6.32.18, crash works fine. However, with a pvops domU kernel version 2.6.32.21 I get these error messages:
crash: cannot find mfn 874307 (0xd5743) in page index
crash: cannot read/find cr3 page
Any suggestions as to what is wrong?
-Thomas
(1) domU information:
cat /proc/version
Linux version 2.6.32.21-1 (gcc version 4.3.3 (Ubuntu 4.3.3-5ubuntu4) ) #1 SMP Tue Sep 7 15:40:55 PDT 2010
cat /proc/cmdline
root=UUID=ee471711-e94d-4ff1-973c-e45526adea25 ro crashkernel=384M-2G:64M,2G-:128M iommu=soft swiotlb=force console=hvc0,115200n8 ip=:127.0.255.255::::eth0:dhcp
cat /sys/kernel/kexec_crash_loaded
1
The xen.cfg uses on_crash = 'coredump-restart'.
The core is generated by executing echo c > /proc/sysrq-trigger on the domU.
(2) core file is generated by the domU crash
-rw------- 1 root root 2145653832 Sep 9 09:42 /var/xen/dump/xyz/2010-0909-0942.13-xyz.1.core
(3) crash information
crash -d99 /boot/vmlinux-2.6.32.21-1 /var/xen/dump/xyz/2010-0909-0942.13-xyz.1.core
crash 5.0.7
...
crash: /var/xen/dump/xyz/2010-0909-0942.13-xyz.1.core: not a netdump ELF dumpfile
crash: /var/xen/dump/xyz/2010-0909-0942.13-xyz.1.core: not a kdump ELF dumpfile
flags: 109 (XENDUMP_LOCAL|XC_CORE_ELF|XC_CORE_P2M_CREATE)
xfd: 3
page_size: 4096
ofp: 0
page: 1d67c70
panic_pc: 0
panic_sp: 0
accesses: 0
cache_hits: 0
last_pfn: -1
redundant: 0
poc[5000]: 1d68c80 (none used)
xc_save:
nr_pfns: 0 (0x0)
vmconfig_size: 0 (0x0)
vmconfig_buf: 0
p2m_frame_list: 0 (none)
pfns_not: 0
pfns_not_offset: 0
vcpu_ctxt_offset: 0
shared_info_page_offset: 0
region_pfn_type: 0
batch_count: 0
batch_offsets: 0 (none)
ia64_version: 0
ia64_page_offsets: 0 (none)
xc_core:
header:
xch_magic: f00febed (XC_CORE_MAGIC)
xch_nr_vcpus: 7
xch_nr_pages: 521792 (0x7f640)
xch_ctxt_offset: 1896 (0x768)
xch_index_offset: 2137305088 (0x7f64b000)
xch_pages_offset: 45056 (0xb000)
elf_class: ELFCLASS64
elf_strtab_offset: 2145653760 (0x7fe41400)
format_version: 0000000000000001
shared_info_offset: 38072 (0x94b8)
elf_index_pfn[128]:
0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1
0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1
0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1 0:-1
last_batch:
index: 0 (0 - 0)
accesses: 0
duplicates: 0
elf32: 0
elf64: 1d67040
p2m_frames: 0
p2m_frame_index_list:
Elf64_Ehdr:
e_ident: \177ELF
e_ident[EI_CLASS]: 2 (ELFCLASS64)
e_ident[EI_DATA]: 1 (ELFDATA2LSB)
e_ident[EI_VERSION]: 1 (EV_CURRENT)
e_ident[EI_OSABI]: 0 (ELFOSABI_SYSV)
e_ident[EI_ABIVERSION]: 1
e_type: 4 (ET_CORE)
e_machine: 62 (EM_X86_64)
e_version: 1 (EV_CURRENT)
e_entry: 0
e_phoff: 0
e_shoff: 40
e_flags: 0
e_ehsize: 40
e_phentsize: 38
e_phnum: 0
e_shentsize: 40
e_shnum: 7
e_shstrndx: 1
Elf64_Shdr:
sh_name: 0 ""
sh_type: 0 (SHT_NULL)
sh_flags: 0
sh_addr: 0
sh_offset: 0
sh_size: 0
sh_link: 0
sh_info: 0
sh_addralign: 0
sh_entsize: 0
Elf64_Shdr:
sh_name: 1 ".shstrtab"
sh_type: 3 (SHT_STRTAB)
sh_flags: 0
sh_addr: 0
sh_offset: 7fe41400
sh_size: 48
sh_link: 0
sh_info: 0
sh_addralign: 0
sh_entsize: 0
.shstrtab
.note.Xen
.xen_prstatus
.xen_shared_info
.xen_pages
.xen_p2m
Elf64_Shdr:
sh_name: b ".note.Xen"
sh_type: 7 (SHT_NOTE)
sh_flags: 0
sh_addr: 0
sh_offset: 200
sh_size: 568
sh_link: 0
sh_info: 0
sh_addralign: 0
sh_entsize: 0
namesz: 4
descz: 0
type: 2000000 (XEN_ELFNOTE_DUMPCORE_NONE)
name: Xen
(empty)
namesz: 4
descz: 32
type: 2000001 (XEN_ELFNOTE_DUMPCORE_HEADER)
name: Xen
00000000f00febed 0000000000000007
000000000007f640 0000000000001000
namesz: 4
descz: 1280
type: 2000002 (XEN_ELFNOTE_DUMPCORE_XEN_VERSION)
0000000000000004 0000000000000000
ff003463722d312e 2400000000000008
7372657620636367 2e342e34206e6f69
746e756255282033 2d332e342e342075
3575746e75627534 ffff82c480002029
0000000000000001 0000000001aa0660
ffff006567646562 0000000000000001
006d6f632e69736c ffff880000000001
ffff82c480367c0e ffff82c400000100
20677541206e7553 34343a3930203120
205444502039323a ffff820030313032
2d302e332d6e6578 782034365f363878
782d302e332d6e65 68207032335f3638
782d302e332d6d76 76682032335f3638
38782d302e332d6d 7668207032335f36
38782d302e332d6d 0000002034365f36
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
0000000000000000 0000000000000000
616c696176616e75 7820343600656c62
782d302e332d6e65 68207032335f3638
782d302e332d6d76 76682032335f3638
38782d302e332d6d 7668207032335f36
ffff800000000000 0000000000001000
namesz: 4
descz: 8
type: 2000003 (XEN_ELFNOTE_DUMPCORE_FORMAT_VERSION)
name: Xen
0000000000000001
Elf64_Shdr:
sh_type: 1 (SHT_PROGBITS)
sh_flags: 0
sh_addr: 0
sh_offset: 768
sh_size: 8d50
sh_link: 0
sh_info: 0
sh_addralign: 8
sh_entsize: 1430
Elf64_Shdr:
sh_name: 23 ".xen_shared_info"
sh_type: 1 (SHT_PROGBITS)
sh_flags: 0
sh_addr: 0
sh_offset: 94b8
sh_size: 1000
sh_link: 0
sh_info: 0
sh_addralign: 8
sh_entsize: 1000
Elf64_Shdr:
sh_name: 34 ".xen_pages"
sh_type: 1 (SHT_PROGBITS)
sh_flags: 0
sh_addr: 0
sh_offset: b000
sh_size: 7f640000
sh_link: 0
sh_info: 0
sh_addralign: 1000
sh_entsize: 1000
Elf64_Shdr:
sh_name: 3f ".xen_p2m"
sh_type: 1 (SHT_PROGBITS)
sh_flags: 0
sh_addr: 0
sh_offset: 7f64b000
sh_size: 7f6400
sh_link: 0
sh_info: 0
sh_addralign: 8
sh_entsize: 10
crash: pv_init_ops exists: ARCH_PVOPS
gdb /boot/vmlinux-2.6.32.21-1
GNU gdb (GDB) 7.0
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...
GETBUF(248 -> 0)
GETBUF(1500 -> 1)
FREEBUF(1)
FREEBUF(0)
<readmem: ffffffff81614800, KVADDR, "kernel_config_data", 32768, (ROE), 2fed090>
addr: ffffffff81614800 paddr: 1614800 cnt: 2048
GETBUF(248 -> 0)
FREEBUF(0)
MEMBER_OFFSET(vcpu_guest_context, ctrlreg): 4984
ctrlreg[0]: 80050033
ctrlreg[1]: d5742000
ctrlreg[2]: 0
ctrlreg[3]: d5743000
ctrlreg[4]: 2660
ctrlreg[5]: 0
ctrlreg[6]: 0
ctrlreg[7]: 0
crash: cannot find mfn 874307 (0xd5743) in page index
crash: cannot read/find cr3 page
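For context, crash builds its mfn-to-pfn index by scanning the .xen_p2m section shown in the debug output above; per its section header, each entry is 16 bytes (sh_entsize 0x10), a little-endian {pfn, gmfn} pair of 64-bit values. A minimal sketch of such a scan (the helper name is mine, not crash's):

```python
import struct

def find_mfn(p2m_bytes, target_mfn):
    """Scan the raw bytes of a xen dumpcore .xen_p2m section for a
    machine frame number.  Entries are 16 bytes each, a little-endian
    {pfn, gmfn} pair of uint64s.  Returns the pfn mapped to target_mfn,
    or None when the mfn is absent -- the condition crash reports above
    as "cannot find mfn ... in page index"."""
    for off in range(0, len(p2m_bytes) - 15, 16):
        pfn, gmfn = struct.unpack_from("<QQ", p2m_bytes, off)
        if gmfn == target_mfn:
            return pfn
    return None
```

The failure above means 0xd5743, the mfn taken from the vcpu's cr3 (ctrlreg[3] = d5743000), has no such entry in the dump's p2m table.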
14 years, 2 months
How to open 32 bit dom0 kdump....
by LI, Feng
Hey you all,
I am somewhat new to xen kernel dump analysis, and right now I have a puzzling
problem. I have generated a vmcore dump from a dom0 that is a 32-bit PAE Linux
kernel running over a 64-bit Xen hypervisor.
However, I have difficulty opening the vmcore file. If I open it with
crash32, crash32 complains that the vmcore is a 64-bit image, while crash64
complains that the dom0 kernel image is 32-bit.
Could anybody help solve my problem?
Thanks in advance for any help and suggestions.
Yours
Kevin F LI
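The disagreement described above can be confirmed by reading e_ident[EI_CLASS], the fifth byte of each file's ELF header (1 = ELFCLASS32, 2 = ELFCLASS64); a xen dumpcore written by a 64-bit hypervisor is ELFCLASS64 even when the guest kernel is 32-bit, which would explain crash32's complaint. A quick sketch (the function name is mine):

```python
def elf_class(path):
    """Report the ELF class of a file from e_ident[EI_CLASS],
    byte 4 of the ELF identification: 1 = ELFCLASS32, 2 = ELFCLASS64."""
    with open(path, "rb") as f:
        ident = f.read(5)
    if len(ident) < 5 or ident[:4] != b"\x7fELF":
        raise ValueError("%s: not an ELF file" % path)
    return {1: "ELFCLASS32", 2: "ELFCLASS64"}.get(ident[4], "unknown")
```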
Re: [Crash-utility] crash does not get proper backtrace?
by Dave Anderson
----- hutao(a)cn.fujitsu.com wrote:
> Hi,
>
> I got a problem where it seemed crash got a bad backtrace.
> The problem occurred under the following conditions:
> On a qemu guest system, I loaded a module that got stuck in
> its init function (say, it called a function that deadlooped),
> then dumped the guest with `virsh dump vm dumpfile', and ran
> crash on the dumpfile.
>
> The module is:
>
> ---
> #include <linux/module.h>
>
> int endless_loop(void)
> {
> printk("endless loop\n");
> while (1);
>
> return 0;
> }
>
> int __init endless_init(void)
> {
> endless_loop();
>
> return 0;
> }
> module_init(endless_init);
>
> MODULE_LICENSE("GPL");
> ---
>
> crash bt command got:
>
> crash> bt -a
> PID: 0 TASK: ffffffff81648020 CPU: 0 COMMAND: "swapper"
> #0 [ffffffff81601e08] schedule at ffffffff813e8a49
> #1 [ffffffff81601e18] apic_timer_interrupt at ffffffff8100344e
> #2 [ffffffff81601ea0] need_resched at ffffffff8100970c
> #3 [ffffffff81601eb0] default_idle at ffffffff81009f6b
> #4 [ffffffff81601ec0] cpu_idle at ffffffff81001bf5
>
> PID: 1088 TASK: ffff88001dda2d60 CPU: 1 COMMAND: "insmod"
> #0 [ffff88001e751dc8] schedule at ffffffff813e8a49
> #1 [ffff88001e751dd0] schedule at ffffffff813e8aec
> #2 [ffff88001e751e80] preempt_schedule_irq at ffffffff813e8c90
> #3 [ffff88001e751e90] retint_kernel at ffffffff813eab86
> #4 [ffff88001e751f20] do_one_initcall at ffffffff81000210
> #5 [ffff88001e751f50] sys_init_module at ffffffff8106b7ca
> #6 [ffff88001e751f80] system_call_fastpath at ffffffff81002a82
> RIP: 00007f761bb58b7a RSP: 00007fff67a43120 RFLAGS: 00010206
> RAX: 00000000000000af RBX: ffffffff81002a82 RCX: 0000000000020010
> RDX: 0000000000b96010 RSI: 00000000000163da RDI: 0000000000b96030
> RBP: 0000000000b96010 R8: 0000000000010011 R9: 0000000000080000
> R10: 00007f761bb4b140 R11: 0000000000000202 R12: 00000000000163da
> R13: 00007fff67a44985 R14: 00000000000163da R15: 0000000000b96010
> ORIG_RAX: 00000000000000af CS: 0033 SS: 002b
>
> Does it lose some function calls between do_one_initcall and retint_kernel?
> (endless_loop <- endless_init)
Your best bet is to use "bt -t" in a case such as that.
If there are no "starting hooks" for the backtrace code to use, then
it simply defaults to the RSP value left in the task->thread_struct->rsp,
and the RIP of the instruction following "__switch_to". Those will be
stale, because they represent the last time that the task blocked in
kernel space. In the case of your endless loop inside the kernel, there
is nothing for the crash utility to grab onto as the starting points because
the task is essentially "active". It's somewhat similar in nature to
using "bt -a" on a live system -- the tasks are running either in
kernel or user space, but do not have any "starting points" for the
backtrace code to latch onto, so it's not even allowed as a command.
Dave
Re: [Crash-utility] crash does not get proper backtrace?
by Dave Anderson
----- "KAMEZAWA Hiroyuki" <kamezawa.hiroyu(a)jp.fujitsu.com> wrote:
> On Thu, 2 Sep 2010 08:44:12 -0400 (EDT)
> Dave Anderson <anderson(a)redhat.com> wrote:
>
> >
> > ----- hutao(a)cn.fujitsu.com wrote:
> >
> > > Hi,
> > >
> > > I got a problem where it seemed crash got a bad backtrace.
> > > The problem occurred under the following conditions:
> > > On a qemu guest system loading a module that stuck at
> > > the init function(say, call a function that did deadlooping),
> > > then dumped the guest by `virsh dump vm dumpfile', and run
> > > crash on the dumpfile.
> > >
> > > The module is:
> > >
> > > ---
> > > #include <linux/module.h>
> > >
> > > int endless_loop(void)
> > > {
> > > printk("endless loop\n");
> > > while (1);
> > >
> > > return 0;
> > > }
> > >
> > > int __init endless_init(void)
> > > {
> > > endless_loop();
> > >
> > > return 0;
> > > }
> > > module_init(endless_init);
> > >
> > > MODULE_LICENSE("GPL");
> > > ---
> > >
> > > crash bt command got:
> > >
> > > crash> bt -a
> > > PID: 0 TASK: ffffffff81648020 CPU: 0 COMMAND: "swapper"
> > > #0 [ffffffff81601e08] schedule at ffffffff813e8a49
> > > #1 [ffffffff81601e18] apic_timer_interrupt at ffffffff8100344e
> > > #2 [ffffffff81601ea0] need_resched at ffffffff8100970c
> > > #3 [ffffffff81601eb0] default_idle at ffffffff81009f6b
> > > #4 [ffffffff81601ec0] cpu_idle at ffffffff81001bf5
> > >
> > > PID: 1088 TASK: ffff88001dda2d60 CPU: 1 COMMAND: "insmod"
> > > #0 [ffff88001e751dc8] schedule at ffffffff813e8a49
> > > #1 [ffff88001e751dd0] schedule at ffffffff813e8aec
> > > #2 [ffff88001e751e80] preempt_schedule_irq at ffffffff813e8c90
> > > #3 [ffff88001e751e90] retint_kernel at ffffffff813eab86
> > > #4 [ffff88001e751f20] do_one_initcall at ffffffff81000210
> > > #5 [ffff88001e751f50] sys_init_module at ffffffff8106b7ca
> > > #6 [ffff88001e751f80] system_call_fastpath at ffffffff81002a82
> > > RIP: 00007f761bb58b7a RSP: 00007fff67a43120 RFLAGS: 00010206
> > > RAX: 00000000000000af RBX: ffffffff81002a82 RCX: 0000000000020010
> > > RDX: 0000000000b96010 RSI: 00000000000163da RDI: 0000000000b96030
> > > RBP: 0000000000b96010 R8: 0000000000010011 R9: 0000000000080000
> > > R10: 00007f761bb4b140 R11: 0000000000000202 R12: 00000000000163da
> > > R13: 00007fff67a44985 R14: 00000000000163da R15: 0000000000b96010
> > > ORIG_RAX: 00000000000000af CS: 0033 SS: 002b
> > >
> > > Does it lose some function calls between do_one_initcall and retint_kernel?
> > > (endless_loop <- endless_init)
> >
> > Your best bet is to use "bt -t" in a case such as that.
> >
> > If there are no "starting hooks" for the backtrace code to use, then
> > it simply defaults to the RSP value left in the task->thread_struct->rsp,
> > and the RIP of the instruction following "__switch_to". Those will be
> > stale, because they represent the last time that the task blocked in
> > kernel space. In the case of your endless loop inside the kernel, there
> > is nothing for the crash utility to grab onto as the starting points because
> > the task is essentially "active". It's somewhat similar in nature to
> > using "bt -a" on a live system -- the tasks are running either in
> > kernel or user space, but do not have any "starting points" for the
> > backtrace code to latch onto, so it's not even allowed as a command.
> >
>
> Hmm, but IIUC, a vmcore taken by kdump on a real host (not on a virtual machine)
> can show endless_loop(). Is that because kdump reads the panicked host image? And
> should we treat a vmcore generated by "virsh dump" as
> - "just a live dump image with no guarantee of synchronous register
> information. If you need that, please freeze the kernel by some switch"?
>
> Can SIGSTOP or something sent to qemu help us take a synchronous snapshot of the registers?
Kdump works because the shutdown sends an NMI to each cpu, leaving an obvious
shutdown trail that can be tracked from the NMI stack back to the process stack.
You could also try using alt-sysrq-c on the guest prior to taking the virsh dump
from the host.
Dave
Re: [Crash-utility] crash does not get proper backtrace?
by Dave Anderson
----- hutao(a)cn.fujitsu.com wrote:
> On Thu, Sep 02, 2010 at 08:55:38AM -0400, Dave Anderson wrote:
> >
> > ----- hutao(a)cn.fujitsu.com wrote:
> >
> > > On Thu, Sep 02, 2010 at 03:46:00PM +0800, hutao(a)cn.fujitsu.com
> wrote:
> > > > Hi,
> > > >
> > > > I got a problem where it seemed crash got a bad backtrace.
> > > > The problem occurred under the following conditions:
> > > > On a qemu guest system loading a module that stuck at
> > > > the init function(say, call a function that did deadlooping),
> > > > then dumped the guest by `virsh dump vm dumpfile', and run
> > > > crash on the dumpfile.
> > > >
> > > > The module is:
> > > >
> > > > ---
> > > > #include <linux/module.h>
> > > >
> > > > int endless_loop(void)
> > > > {
> > > > printk("endless loop\n");
> > > > while (1);
> > > >
> > > > return 0;
> > > > }
> > > >
> > > > int __init endless_init(void)
> > > > {
> > > > endless_loop();
> > > >
> > > > return 0;
> > > > }
> > > > module_init(endless_init);
> > > >
> > > > MODULE_LICENSE("GPL");
> > > > ---
> > > >
> > > > crash bt command got:
> > > >
> > > > crash> bt -a
> > > > PID: 0 TASK: ffffffff81648020 CPU: 0 COMMAND: "swapper"
> > > > #0 [ffffffff81601e08] schedule at ffffffff813e8a49
> > > > #1 [ffffffff81601e18] apic_timer_interrupt at ffffffff8100344e
> > > > #2 [ffffffff81601ea0] need_resched at ffffffff8100970c
> > > > #3 [ffffffff81601eb0] default_idle at ffffffff81009f6b
> > > > #4 [ffffffff81601ec0] cpu_idle at ffffffff81001bf5
> > > >
> > > > PID: 1088 TASK: ffff88001dda2d60 CPU: 1 COMMAND: "insmod"
> > > > #0 [ffff88001e751dc8] schedule at ffffffff813e8a49
> > > > #1 [ffff88001e751dd0] schedule at ffffffff813e8aec
> > > > #2 [ffff88001e751e80] preempt_schedule_irq at ffffffff813e8c90
> > > > #3 [ffff88001e751e90] retint_kernel at ffffffff813eab86
> > > > #4 [ffff88001e751f20] do_one_initcall at ffffffff81000210
> > > > #5 [ffff88001e751f50] sys_init_module at ffffffff8106b7ca
> > > > #6 [ffff88001e751f80] system_call_fastpath at ffffffff81002a82
> > > > RIP: 00007f761bb58b7a RSP: 00007fff67a43120 RFLAGS: 00010206
> > > > RAX: 00000000000000af RBX: ffffffff81002a82 RCX: 0000000000020010
> > > > RDX: 0000000000b96010 RSI: 00000000000163da RDI: 0000000000b96030
> > > > RBP: 0000000000b96010 R8: 0000000000010011 R9: 0000000000080000
> > > > R10: 00007f761bb4b140 R11: 0000000000000202 R12: 00000000000163da
> > > > R13: 00007fff67a44985 R14: 00000000000163da R15: 0000000000b96010
> > > > ORIG_RAX: 00000000000000af CS: 0033 SS: 002b
> > > >
> > > > Does it lose some function calls between do_one_initcall and retint_kernel?
> > > > (endless_loop <- endless_init)
> > > >
> > >
> > > In addition, if we don't get stuck in the init function (there is still a deadloop
> > > somewhere in the module, but triggered by, say, reading a /proc file), then the backtrace
> > > output by crash is correct.
> >
> > When you say "correct", I presume that you see your module functions as frames.
> > But if you also see the backtrace starting with "schedule", then it's just luck
> > that the backtrace bumped into your module functions. It just so happened that
> > when walking back from schedule(), it "mistakenly" stumbled upon your module's
> > functions.
> >
> > In the example above, I presume that when trying to backtrace from retint_kernel(),
> > it stepped over your module's "loop" functions that were called via do_one_initcall().
> > That's why I suggest that you should probably see them on the kernel stack in
> > between ffff88001e751e90 and ffff88001e751f20 if you use "bt -t". That is what
> > "bt -t" is for -- the "bt" command is never guaranteed to be correct.
>
> Nothing new in between ffff88001e751e90 and ffff88001e751f20 with `bt -at':
>
> crash> bt -at
> PID: 0 TASK: ffffffff81648020 CPU: 0 COMMAND: "swapper"
> START: schedule at ffffffff813e8a49
> [ffffffff81601e18] apic_timer_interrupt at ffffffff8100344e
> [ffffffff81601e60] __atomic_notifier_call_chain at ffffffff813ed799
> [ffffffff81601ea0] need_resched at ffffffff8100970c
> [ffffffff81601eb0] default_idle at ffffffff81009f6b
> [ffffffff81601ec0] cpu_idle at ffffffff81001bf5
> [ffffffff81601f10] rest_init at ffffffff813d72ec
> [ffffffff81601f30] start_kernel at ffffffff816e1d77
> [ffffffff81601f40] command_line at ffffffff81718e90
> [ffffffff81601f70] x86_64_start_reservations at ffffffff816e12ac
> [ffffffff81601f90] x86_64_start_kernel at ffffffff816e13a8
>
> PID: 1088 TASK: ffff88001dda2d60 CPU: 1 COMMAND: "insmod"
> START: schedule at ffffffff813e8a49
> [ffff88001e751dd0] schedule at ffffffff813e8aec
> [ffff88001e751e40] get_parent_ip at ffffffff8102f193
> [ffff88001e751e60] sub_preempt_count at ffffffff813ed62c
> [ffff88001e751e70] need_resched at ffffffff8102a975
> [ffff88001e751e80] preempt_schedule_irq at ffffffff813e8c90
> [ffff88001e751e90] retint_kernel at ffffffff813eab86
> [ffff88001e751f20] do_one_initcall at ffffffff81000210
> [ffff88001e751f50] sys_init_module at ffffffff8106b7ca
> [ffff88001e751f80] system_call_fastpath at ffffffff81002a82
> RIP: 00007f761bb58b7a RSP: 00007fff67a43120 RFLAGS: 00010206
> RAX: 00000000000000af RBX: ffffffff81002a82 RCX: 0000000000020010
> RDX: 0000000000b96010 RSI: 00000000000163da RDI: 0000000000b96030
> RBP: 0000000000b96010 R8: 0000000000010011 R9: 0000000000080000
> R10: 00007f761bb4b140 R11: 0000000000000202 R12: 00000000000163da
> R13: 00007fff67a44985 R14: 00000000000163da R15: 0000000000b96010
> ORIG_RAX: 00000000000000af CS: 0033 SS: 002b
> crash>
>
>
> This is the related part of output of `bt -ar' in case you're
> interested:
>
> ffff88001e751e60: sub_preempt_count+146 ffff88001e751e78
> ffff88001e751e70: need_resched+30 ffff88001e751e88
> ffff88001e751e80: preempt_schedule_irq+82 ffff88001e751f18
> ffff88001e751e90: retint_kernel+38 ffff88001e751dd8
> ffff88001e751ea0: 0000000000000000 0000000000000004
> ffff88001e751eb0: 0000000000000000 ffff88001e751fd8
> ffff88001e751ec0: 0000000000000000 0000000000000001
> ffff88001e751ed0: 0000000000000000 ffffffffa00f9000
> ffff88001e751ee0: ffffffffffffff10 ffffffffa00f9004
> ffff88001e751ef0: 0000000000000010 0000000000000246
> ffff88001e751f00: ffff88001e751f18 0000000000000018
> ffff88001e751f10: ffff8800ffffffff ffff88001e751f48
> ffff88001e751f20: do_one_initcall+122 __this_module
> ffff88001e751f30: 0000000000000000 0000000000020000
> ffff88001e751f40: 0000000000b96030 ffff88001e751f78
> ffff88001e751f50: sys_init_module+196 0000000000b96010
> ffff88001e751f60: 00000000000163da 00007fff67a44985
> ffff88001e751f70: 00000000000163da 0000000000b96010
> ffff88001e751f80: system_call_fastpath+22 0000000000000202
> ffff88001e751f90: 00007f761bb4b140 0000000000080000
> ffff88001e751fa0: 0000000000010011 00000000000000af
> ffff88001e751fb0: 0000000000020010 0000000000b96010
> ffff88001e751fc0: 00000000000163da 0000000000b96030
> ffff88001e751fd0: 00000000000000af 00007f761bb58b7a
> ffff88001e751fe0: 0000000000000033 0000000000010206
> ffff88001e751ff0: 00007fff67a43120 000000000000002b
>
> and `sym -m endless':
>
> crash> sym -m endless
> ffffffffa00f6000 MODULE START: endless
> ffffffffa00f6000 (?) endless_loop
> ffffffffa00f6030 (r) __ksymtab_endless_loop
> ffffffffa00f6040 (r) __kstrtab_endless_loop
> ffffffffa00f6050 (d) __this_module
> ffffffffa00f6345 MODULE END: endless
> crash>
Well, in that case there is no evidence left on the kernel stack
by the endless_loop() function, i.e., the return address back
to endless_loop() from its call to printk().
Dave
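The "evidence" in question is a saved return address: every call instruction pushes one, and those kernel-text values left on the stack are exactly what a backtracer like crash's "bt -t" scans for. A minimal user-space sketch of reading such an address (not kernel code; uses a GCC/Clang builtin):

```c
#include <stddef.h>

/* Every "call" pushes a return address on the stack; a backtracer
 * recognizes such text-segment addresses among the words it finds
 * there.  This helper reads the return address pushed by the call
 * that invoked it; noinline keeps that call (and its pushed return
 * address) from being optimized away. */
__attribute__((noinline)) void *saved_return_address(void)
{
    return __builtin_return_address(0);
}
```

When nothing calls out of a frame again (as with the while(1) loop here), no newer return address gets pushed, so the trail ends there.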
[PLEASE NOTE] LKML and the crash-utility mailing list
by Dave Anderson
Please do NOT clutter the LKML with crash utility failures that are caused by
recent changes to the upstream kernel.
The crash utility by definition will require constant updates in order to
handle the changes being made to the upstream kernel that affect subsystems
that crash depends upon.
It has always been that way and always will be, and it is one of the major reasons
that I generally release a new version every month.
While I do appreciate reports on *this* mailing list that spotlight the specific
upstream kernel patch that causes breakage to the crash utility, it serves
no purpose to post those findings on the LKML. THEY ARE NOT KERNEL BUGS.
Thanks,
Dave
14 years, 2 months
Fwd: another crash failure with 2.6.36-rc3 vmcore
by CAI Qian
----- Forwarded Message -----
From: caiqian(a)redhat.com
To: "Dave Anderson" <anderson(a)redhat.com>
Cc: "linux-kernel" <linux-kernel(a)vger.kernel.org>, npiggin(a)kernel.dk, "tim c chen" <tim.c.chen(a)linux.intel.com>, ak(a)linux.intel.com, viro(a)zeniv.linux.org.uk
Sent: Saturday, September 4, 2010 7:29:38 PM GMT +08:00 Beijing / Chongqing / Hong Kong / Urumqi
Subject: another crash failure with 2.6.36-rc3 vmcore
> crash> mount -f
> VFSMOUNT SUPERBLK TYPE DEVNAME
> DIRNAME
> ffff88046e367e80 ffff880c6e4e9400 rootfs rootfs
> /
> mount: invalid kernel virtual address: 18278 type: "first list entry"
Dave, this failure is due to this commit,
commit 6416ccb7899960868f5016751fb81bf25213d24f
Author: Nick Piggin <npiggin(a)kernel.dk>
Date: Wed Aug 18 04:37:38 2010 +1000
fs: scale files_lock
Improve scalability of files_lock by adding per-cpu, per-sb files lists,
protected with an lglock. The lglock provides fast access to the per-cpu lists
to add and remove files. It also provides a snapshot of all the per-cpu lists
(although this is very slow).
One difficulty with this approach is that a file can be removed from the list
by another CPU. We must track which per-cpu list the file is on with a new
variable in the file struct (packed into a hole on 64-bit archs). Scalability
could suffer if files are frequently removed from different cpu's list.
However loads with frequent removal of files imply short interval between
adding and removing the files, and the scheduler attempts to avoid moving
processes too far away. Also, even in the case of cross-CPU removal, the
hardware has much more opportunity to parallelise cacheline transfers with N
cachelines than with 1.
A worst-case test of 1 CPU allocating files subsequently being freed by N CPUs
degenerates to contending on a single lock, which is no worse than before. When
more than one CPU are allocating files, even if they are always freed by
different CPUs, there will be more parallelism than the single-lock case.
Testing results:
On a 2 socket, 8 core opteron, I measure the number of times the lock is taken
to remove the file, the number of times it is removed by the same CPU that
added it, and the number of times it is removed by the same node that added it.
Booting: locks= 25049 cpu-hits= 23174 (92.5%) node-hits= 23945 (95.6%)
kbuild -j16 locks=2281913 cpu-hits=2208126 (96.8%) node-hits=2252674 (98.7%)
dbench 64 locks=4306582 cpu-hits=4287247 (99.6%) node-hits=4299527 (99.8%)
So a file is removed from the same CPU it was added by over 90% of the time.
It remains within the same node 95% of the time.
Tim Chen ran some numbers for a 64 thread Nehalem system performing a compile.
throughput
2.6.34-rc2 24.5
+patch 24.9
us sys idle IO wait (in %)
2.6.34-rc2 51.25 28.25 17.25 3.25
+patch 53.75 18.5 19 8.75
So significantly less CPU time spent in kernel code, higher idle time and
slightly higher throughput.
Single threaded performance difference was within the noise of microbenchmarks.
That is not to say penalty does not exist, the code is larger and more memory
accesses required so it will be slightly slower.
Cc: linux-kernel(a)vger.kernel.org
Cc: Tim Chen <tim.c.chen(a)linux.intel.com>
Cc: Andi Kleen <ak(a)linux.intel.com>
Signed-off-by: Nick Piggin <npiggin(a)kernel.dk>
Signed-off-by: Al Viro <viro(a)zeniv.linux.org.uk>
diff --git a/fs/file_table.c b/fs/file_table.c
index 6f0e62e..a04bdd8 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -20,7 +20,9 @@
#include <linux/cdev.h>
#include <linux/fsnotify.h>
#include <linux/sysctl.h>
+#include <linux/lglock.h>
#include <linux/percpu_counter.h>
+#include <linux/percpu.h>
#include <linux/ima.h>
#include <asm/atomic.h>
@@ -32,7 +34,8 @@ struct files_stat_struct files_stat = {
.max_files = NR_FILE
};
-static __cacheline_aligned_in_smp DEFINE_SPINLOCK(files_lock);
+DECLARE_LGLOCK(files_lglock);
+DEFINE_LGLOCK(files_lglock);
/* SLAB cache for file structures */
static struct kmem_cache *filp_cachep __read_mostly;
@@ -336,30 +339,98 @@ void put_filp(struct file *file)
}
}
+static inline int file_list_cpu(struct file *file)
+{
+#ifdef CONFIG_SMP
+ return file->f_sb_list_cpu;
+#else
+ return smp_processor_id();
+#endif
+}
+
+/* helper for file_sb_list_add to reduce ifdefs */
+static inline void __file_sb_list_add(struct file *file, struct super_block *sb)
+{
+ struct list_head *list;
+#ifdef CONFIG_SMP
+ int cpu;
+ cpu = smp_processor_id();
+ file->f_sb_list_cpu = cpu;
+ list = per_cpu_ptr(sb->s_files, cpu);
+#else
+ list = &sb->s_files;
+#endif
+ list_add(&file->f_u.fu_list, list);
+}
+
+/**
+ * file_sb_list_add - add a file to the sb's file list
+ * @file: file to add
+ * @sb: sb to add it to
+ *
+ * Use this function to associate a file with the superblock of the inode it
+ * refers to.
+ */
void file_sb_list_add(struct file *file, struct super_block *sb)
{
- spin_lock(&files_lock);
- BUG_ON(!list_empty(&file->f_u.fu_list));
- list_add(&file->f_u.fu_list, &sb->s_files);
- spin_unlock(&files_lock);
+ lg_local_lock(files_lglock);
+ __file_sb_list_add(file, sb);
+ lg_local_unlock(files_lglock);
}
+/**
+ * file_sb_list_del - remove a file from the sb's file list
+ * @file: file to remove
+ * @sb: sb to remove it from
+ *
+ * Use this function to remove a file from its superblock.
+ */
void file_sb_list_del(struct file *file)
{
if (!list_empty(&file->f_u.fu_list)) {
- spin_lock(&files_lock);
+ lg_local_lock_cpu(files_lglock, file_list_cpu(file));
list_del_init(&file->f_u.fu_list);
- spin_unlock(&files_lock);
+ lg_local_unlock_cpu(files_lglock, file_list_cpu(file));
}
}
+#ifdef CONFIG_SMP
+
+/*
+ * These macros iterate all files on all CPUs for a given superblock.
+ * files_lglock must be held globally.
+ */
+#define do_file_list_for_each_entry(__sb, __file) \
+{ \
+ int i; \
+ for_each_possible_cpu(i) { \
+ struct list_head *list; \
+ list = per_cpu_ptr((__sb)->s_files, i); \
+ list_for_each_entry((__file), list, f_u.fu_list)
+
+#define while_file_list_for_each_entry \
+ } \
+}
+
+#else
+
+#define do_file_list_for_each_entry(__sb, __file) \
+{ \
+ struct list_head *list; \
+ list = &(sb)->s_files; \
+ list_for_each_entry((__file), list, f_u.fu_list)
+
+#define while_file_list_for_each_entry \
+}
+
+#endif
+
int fs_may_remount_ro(struct super_block *sb)
{
struct file *file;
-
/* Check that no files are currently opened for writing. */
- spin_lock(&files_lock);
- list_for_each_entry(file, &sb->s_files, f_u.fu_list) {
+ lg_global_lock(files_lglock);
+ do_file_list_for_each_entry(sb, file) {
struct inode *inode = file->f_path.dentry->d_inode;
/* File with pending delete? */
@@ -369,11 +440,11 @@ int fs_may_remount_ro(struct super_block *sb)
/* Writeable file? */
if (S_ISREG(inode->i_mode) && (file->f_mode & FMODE_WRITE))
goto too_bad;
- }
- spin_unlock(&files_lock);
+ } while_file_list_for_each_entry;
+ lg_global_unlock(files_lglock);
return 1; /* Tis' cool bro. */
too_bad:
- spin_unlock(&files_lock);
+ lg_global_unlock(files_lglock);
return 0;
}
@@ -389,8 +460,8 @@ void mark_files_ro(struct super_block *sb)
struct file *f;
retry:
- spin_lock(&files_lock);
- list_for_each_entry(f, &sb->s_files, f_u.fu_list) {
+ lg_global_lock(files_lglock);
+ do_file_list_for_each_entry(sb, f) {
struct vfsmount *mnt;
if (!S_ISREG(f->f_path.dentry->d_inode->i_mode))
continue;
@@ -406,12 +477,12 @@ retry:
file_release_write(f);
mnt = mntget(f->f_path.mnt);
/* This can sleep, so we can't hold the spinlock. */
- spin_unlock(&files_lock);
+ lg_global_unlock(files_lglock);
mnt_drop_write(mnt);
mntput(mnt);
goto retry;
- }
- spin_unlock(&files_lock);
+ } while_file_list_for_each_entry;
+ lg_global_unlock(files_lglock);
}
void __init files_init(unsigned long mempages)
@@ -431,5 +502,6 @@ void __init files_init(unsigned long mempages)
if (files_stat.max_files < NR_FILE)
files_stat.max_files = NR_FILE;
files_defer_init();
+ lg_lock_init(files_lglock);
percpu_counter_init(&nr_files, 0);
}
diff --git a/fs/super.c b/fs/super.c
index 9674ab2..8819e3a 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -54,7 +54,22 @@ static struct super_block *alloc_super(struct file_system_type *type)
s = NULL;
goto out;
}
+#ifdef CONFIG_SMP
+ s->s_files = alloc_percpu(struct list_head);
+ if (!s->s_files) {
+ security_sb_free(s);
+ kfree(s);
+ s = NULL;
+ goto out;
+ } else {
+ int i;
+
+ for_each_possible_cpu(i)
+ INIT_LIST_HEAD(per_cpu_ptr(s->s_files, i));
+ }
+#else
INIT_LIST_HEAD(&s->s_files);
+#endif
INIT_LIST_HEAD(&s->s_instances);
INIT_HLIST_HEAD(&s->s_anon);
INIT_LIST_HEAD(&s->s_inodes);
@@ -108,6 +123,9 @@ out:
*/
static inline void destroy_super(struct super_block *s)
{
+#ifdef CONFIG_SMP
+ free_percpu(s->s_files);
+#endif
security_sb_free(s);
kfree(s->s_subtype);
kfree(s->s_options);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 5e65add..76041b6 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -920,6 +920,9 @@ struct file {
#define f_vfsmnt f_path.mnt
const struct file_operations *f_op;
spinlock_t f_lock; /* f_ep_links, f_flags, no IRQ */
+#ifdef CONFIG_SMP
+ int f_sb_list_cpu;
+#endif
atomic_long_t f_count;
unsigned int f_flags;
fmode_t f_mode;
@@ -1334,7 +1337,11 @@ struct super_block {
struct list_head s_inodes; /* all inodes */
struct hlist_head s_anon; /* anonymous dentries for (nfs) exporting */
+#ifdef CONFIG_SMP
+ struct list_head __percpu *s_files;
+#else
struct list_head s_files;
+#endif
/* s_dentry_lru and s_nr_dentry_unused are protected by dcache_lock */
struct list_head s_dentry_lru; /* unused dentry lru */
int s_nr_dentry_unused; /* # of dentry on lru */
14 years, 2 months
Fwd: crash failure with 2.6.36-rc3 vmcore
by CAI Qian
----- Forwarded Message -----
From: "CAI Qian" <caiqian(a)redhat.com>
To: "Dave Anderson" <anderson(a)redhat.com>, "guenter roeck" <guenter.roeck(a)ericsson.com>, tj(a)kernel.org, gregkh(a)suse.de
Cc: "linux-kernel" <linux-kernel(a)vger.kernel.org>
Sent: Saturday, September 4, 2010 2:01:06 PM GMT +08:00 Beijing / Chongqing / Hong Kong / Urumqi
Subject: crash failure with 2.6.36-rc3 vmcore
> crash> mod -S
>
> mod: invalid structure member offset: attribute_owner
> FILE: symbols.c LINE: 8577 FUNCTION: add_symbol_file_kallsyms()
>
> MODULE NAME SIZE OBJECT FILE
> ffffffffa000de60 dm_mod 76230
> /lib/modules/2.6.36-rc2-mm1-wqfix-mkdfix+/kernel/drivers/md/dm-mod.ko
> [/usr/bin/crash] error trace: 4affb0 => 4f3236 => 4f12b5 => 4e587a
>
> 4e587a: OFFSET_verify.clone.4+186
> 4f12b5: add_symbol_file+933
> 4f3236: load_module_symbols+566
> 4affb0: do_module_cmd+1264
>
> mod: invalid structure member offset: attribute_owner
> FILE: symbols.c LINE: 8577 FUNCTION: add_symbol_file_kallsyms()
This failure was due to the following commit:
commit 6fd69dc578fa0b1bbc3aad70ae3af9a137211707
Author: Guenter Roeck <guenter.roeck(a)ericsson.com>
Date: Wed Jul 28 22:09:26 2010 -0700
sysfs: Remove owner field from sysfs struct attribute
Signed-off-by: Guenter Roeck <guenter.roeck(a)ericsson.com>
Acked-by: Tejun Heo <tj(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)suse.de>
diff --git a/include/linux/sysfs.h b/include/linux/sysfs.h
index 8bf06b6..3c92121 100644
--- a/include/linux/sysfs.h
+++ b/include/linux/sysfs.h
@@ -22,14 +22,8 @@ struct kobject;
struct module;
enum kobj_ns_type;
-/* FIXME
- * The *owner field is no longer used.
- * x86 tree has been cleaned up. The owner
- * attribute is still left for other arches.
- */
struct attribute {
const char *name;
- struct module *owner;
mode_t mode;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
struct lock_class_key *key;
Re: [Crash-utility] crash does not get proper backtrace?
by Dave Anderson
----- hutao(a)cn.fujitsu.com wrote:
> On Thu, Sep 02, 2010 at 03:46:00PM +0800, hutao(a)cn.fujitsu.com wrote:
> > Hi,
> >
> > I ran into a problem where crash seemed to produce a bad backtrace.
> > The problem occurred under the following conditions: on a qemu
> > guest, load a module that gets stuck in its init function (say, by
> > calling a function that loops forever), then dump the guest with
> > `virsh dump vm dumpfile' and run crash on the dumpfile.
> >
> > The module is:
> >
> > ---
> > #include <linux/module.h>
> >
> > int endless_loop(void)
> > {
> > printk("endless loop\n");
> > while (1);
> >
> > return 0;
> > }
> >
> > int __init endless_init(void)
> > {
> > endless_loop();
> >
> > return 0;
> > }
> > module_init(endless_init);
> >
> > MODULE_LICENSE("GPL");
> > ---
> >
> > crash bt command got:
> >
> > crash> bt -a
> > PID: 0 TASK: ffffffff81648020 CPU: 0 COMMAND: "swapper"
> > #0 [ffffffff81601e08] schedule at ffffffff813e8a49
> > #1 [ffffffff81601e18] apic_timer_interrupt at ffffffff8100344e
> > #2 [ffffffff81601ea0] need_resched at ffffffff8100970c
> > #3 [ffffffff81601eb0] default_idle at ffffffff81009f6b
> > #4 [ffffffff81601ec0] cpu_idle at ffffffff81001bf5
> >
> > PID: 1088 TASK: ffff88001dda2d60 CPU: 1 COMMAND: "insmod"
> > #0 [ffff88001e751dc8] schedule at ffffffff813e8a49
> > #1 [ffff88001e751dd0] schedule at ffffffff813e8aec
> > #2 [ffff88001e751e80] preempt_schedule_irq at ffffffff813e8c90
> > #3 [ffff88001e751e90] retint_kernel at ffffffff813eab86
> > #4 [ffff88001e751f20] do_one_initcall at ffffffff81000210
> > #5 [ffff88001e751f50] sys_init_module at ffffffff8106b7ca
> > #6 [ffff88001e751f80] system_call_fastpath at ffffffff81002a82
> > RIP: 00007f761bb58b7a RSP: 00007fff67a43120 RFLAGS: 00010206
> > RAX: 00000000000000af RBX: ffffffff81002a82 RCX: 0000000000020010
> > RDX: 0000000000b96010 RSI: 00000000000163da RDI: 0000000000b96030
> > RBP: 0000000000b96010 R8: 0000000000010011 R9: 0000000000080000
> > R10: 00007f761bb4b140 R11: 0000000000000202 R12: 00000000000163da
> > R13: 00007fff67a44985 R14: 00000000000163da R15: 0000000000b96010
> > ORIG_RAX: 00000000000000af CS: 0033 SS: 002b
> >
> > Does it lose some function calls between do_one_initcall and retint_kernel?
> > (endless_loop <- endless_init)
> >
>
> In addition, if we don't get stuck in the init function (the dead loop
> is still in the module, but is triggered by, say, reading a /proc file),
> then the backtrace output by crash is correct.
When you say "correct", I presume that you see your module functions as frames.
But if that backtrace also starts with "schedule", then it's just luck: when
walking back from schedule(), the unwinder happened to stumble upon your
module's functions.
In the example above, I presume that when trying to backtrace from retint_kernel(),
it stepped over your module's "loop" functions that were called via do_one_initcall().
That's why I suggest that you should probably see them on the kernel stack in
between ffff88001e751e90 and ffff88001e751f20 if you use "bt -t". That is what
"bt -t" is for -- the "bt" command is never guaranteed to be correct.
Dave
crash does not get proper backtrace?
by hutao@cn.fujitsu.com
Hi,
I ran into a problem where crash seemed to produce a bad backtrace.
The problem occurred under the following conditions: on a qemu guest,
load a module that gets stuck in its init function (say, by calling a
function that loops forever), then dump the guest with
`virsh dump vm dumpfile' and run crash on the dumpfile.
The module is:
---
#include <linux/module.h>
int endless_loop(void)
{
printk("endless loop\n");
while (1);
return 0;
}
int __init endless_init(void)
{
endless_loop();
return 0;
}
module_init(endless_init);
MODULE_LICENSE("GPL");
---
crash bt command got:
crash> bt -a
PID: 0 TASK: ffffffff81648020 CPU: 0 COMMAND: "swapper"
#0 [ffffffff81601e08] schedule at ffffffff813e8a49
#1 [ffffffff81601e18] apic_timer_interrupt at ffffffff8100344e
#2 [ffffffff81601ea0] need_resched at ffffffff8100970c
#3 [ffffffff81601eb0] default_idle at ffffffff81009f6b
#4 [ffffffff81601ec0] cpu_idle at ffffffff81001bf5
PID: 1088 TASK: ffff88001dda2d60 CPU: 1 COMMAND: "insmod"
#0 [ffff88001e751dc8] schedule at ffffffff813e8a49
#1 [ffff88001e751dd0] schedule at ffffffff813e8aec
#2 [ffff88001e751e80] preempt_schedule_irq at ffffffff813e8c90
#3 [ffff88001e751e90] retint_kernel at ffffffff813eab86
#4 [ffff88001e751f20] do_one_initcall at ffffffff81000210
#5 [ffff88001e751f50] sys_init_module at ffffffff8106b7ca
#6 [ffff88001e751f80] system_call_fastpath at ffffffff81002a82
RIP: 00007f761bb58b7a RSP: 00007fff67a43120 RFLAGS: 00010206
RAX: 00000000000000af RBX: ffffffff81002a82 RCX: 0000000000020010
RDX: 0000000000b96010 RSI: 00000000000163da RDI: 0000000000b96030
RBP: 0000000000b96010 R8: 0000000000010011 R9: 0000000000080000
R10: 00007f761bb4b140 R11: 0000000000000202 R12: 00000000000163da
R13: 00007fff67a44985 R14: 00000000000163da R15: 0000000000b96010
ORIG_RAX: 00000000000000af CS: 0033 SS: 002b
Does it lose some function calls between do_one_initcall and retint_kernel?
(endless_loop <- endless_init)
--
Thanks,
Hu Tao