At 02/27/2012 10:10 PM, Dave Anderson Wrote:
> ----- Original Message -----
>>
>>> No. What I want to understand is how x86_64_calc_phys_base() will
>>> be able to confidently recognize that an ELF file was qemu-generated,
>>> so that it can then do the right thing.
>> The guest is in first kernel:
>> # readelf /tmp/vm2.save -l| grep 0xffffffff8
>> LOAD 0x00000000010226b8 0xffffffff81000000 0x0000000001000000
>> LOAD 0x00000000010226b8 0xffffffff81000000 0x0000000001000000
>> LOAD 0x00000000010226b8 0xffffffff81000000 0x0000000001000000
>> LOAD 0x00000000010226b8 0xffffffff81000000 0x0000000001000000
> Why are there multiple segments describing the same virtual/physical region?
> Is there one START_KERNEL_map segment for each vcpu? Are their FileSiz/MemSiz
> values all the same?
Thanks for pointing out this problem. I have updated the qemu-side patch:
The guest is in the first kernel (vcpu: 4):
# readelf /tmp/vm2.save -l| grep 0xffffffff8
LOAD 0x000000000100f360 0xffffffff81000000 0x0000000001000000
The guest is in the second kernel (vcpu: 4):
# readelf /tmp/vm2.save2 -l| grep 0xffffffff8
LOAD 0x000000000100eb10 0xffffffff81000000 0x0000000001000000
LOAD 0x000000000400eb10 0xffffffff81000000 0x0000000004000000
The guest is in the second kernel (vcpu: 1):
[root@ghost ~]# readelf /tmp/vm2.save3 -l| grep 0xffffffff8
LOAD 0x0000000004001cfc 0xffffffff81000000 0x0000000004000000
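(For reference, the reason this segment matters: crash derives phys_base from
the LOAD segment whose virtual address falls in the kernel-text mapping, which
is why the 0x0000000001000000 vs 0x0000000004000000 PhysAddr above leads to a
different phys_base. Below is a minimal sketch of that calculation, assuming an
x86_64 guest with __START_KERNEL_map at 0xffffffff80000000; the function is
illustrative only, not crash's actual x86_64_calc_phys_base(), whose logic
differs in detail.)

#include <elf.h>
#include <stdint.h>

/* x86_64 kernel-text mapping base (assumed; matches the addresses above) */
#define __START_KERNEL_map 0xffffffff80000000UL

/*
 * Illustrative sketch: given the program headers of a guest core, derive
 * phys_base from the LOAD segment that maps the kernel text region, i.e.
 * whose p_vaddr lies at or above __START_KERNEL_map.
 */
static int calc_phys_base_sketch(const Elf64_Phdr *phdrs, int count,
                                 uint64_t *phys_base)
{
    for (int i = 0; i < count; i++) {
        const Elf64_Phdr *p = &phdrs[i];

        if (p->p_type != PT_LOAD || p->p_vaddr < __START_KERNEL_map)
            continue;

        /*
         * e.g. p_vaddr 0xffffffff81000000, p_paddr 0x01000000 -> phys_base 0
         *      p_vaddr 0xffffffff81000000, p_paddr 0x04000000 -> phys_base 0x3000000
         */
        *phys_base = p->p_paddr - (p->p_vaddr - __START_KERNEL_map);
        return 0;
    }

    return -1;   /* no kernel-text segment found */
}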
>> The guest is in the second kernel (vcpu > 1)
>> ]# readelf /tmp/vm2.save2 -l| grep 0xffffffff8
>> LOAD 0x0000000001017be0 0xffffffff81000000 0x0000000001000000
>> LOAD 0x0000000001017be0 0xffffffff81000000 0x0000000001000000
>> LOAD 0x0000000001017be0 0xffffffff81000000 0x0000000001000000
>> LOAD 0x0000000004017be0 0xffffffff81000000 0x0000000004000000
> Again, it's not clear why there are multiple segments with the same
> virtual address, but I'm guessing that the one segment that starts
> at 0x0000000004000000 is associated with the second kernel, and the other
> ones are for vcpus that ran in the first kernel?
>> The guest is in the second kernel (vcpu = 1)
>> [root@ghost ~]# readelf /tmp/vm2.save3 -l| grep 0xffffffff8
>> LOAD 0x0000000004001e4c 0xffffffff81000000 0x0000000004000000
>>
>> I cannot find a way to differentiate qemu-generated ELF headers from
>> kdump-generated ELF headers.
> Kdump-generated vmcores cannot have multiple START_KERNEL_map segments.
> But with dumps where (vcpu = 1), there could be confusion, since it's not obvious
> whether the START_KERNEL_map region belongs to the first or the second kernel.
> That being the case, I don't see how this can be supported cleanly by the crash
> utility unless there is a NOTE, or some other obvious identifier, that absolutely
> confirms that the dumpfile was qemu-generated.
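(To make the ambiguity concrete: counting the kernel-text LOAD segments can
rule out kdump when there is more than one, but with a single vcpu both formats
present exactly one such segment. A rough sketch of that heuristic follows,
using the same assumed __START_KERNEL_map value as in the sketch above, purely
to show why it is not sufficient on its own.)

#include <elf.h>

#define __START_KERNEL_map 0xffffffff80000000UL   /* assumed, as above */

/*
 * Rough heuristic: count LOAD segments mapping the kernel-text region.
 * A kdump vmcore has exactly one; the qemu dumps above can have several
 * (roughly one per kernel image the vcpus were running in), but with a
 * single vcpu the count is also one, so the count alone cannot tell a
 * qemu-generated dump from a kdump-generated one.
 */
static int count_kernel_text_segments(const Elf64_Phdr *phdrs, int count)
{
    int n = 0;

    for (int i = 0; i < count; i++)
        if (phdrs[i].p_type == PT_LOAD &&
            phdrs[i].p_vaddr >= __START_KERNEL_map)
            n++;

    return n;   /* > 1: cannot be kdump; == 1: still ambiguous */
}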
The note information stored in the qemu-generated core:
Program Headers:
Type Offset VirtAddr PhysAddr
FileSiz MemSiz Flags Align
NOTE 0x000000000000edd0 0x0000000000000000 0x0000000000000000
0x0000000000000590 0x0000000000000590 0
I think its format is the same as kdump's vmcore. Is the NOTE segment's
VirtAddr in a kdump-generated core always 0? If so, what about setting
virt_addr to -1 in the qemu-generated core?
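(If qemu adopted that convention, the check on the crash side could be as
simple as the sketch below. The assumption that kdump always leaves the NOTE
segment's p_vaddr at 0 while qemu would stamp it with -1 is exactly the open
question above, and the function name is made up for illustration.)

#include <elf.h>
#include <stdbool.h>

/*
 * Sketch of the proposed check: if the PT_NOTE program header carries
 * p_vaddr == -1, treat the dump as qemu-generated; a kdump-generated
 * vmcore is assumed to leave that field at 0.
 */
static bool looks_qemu_generated(const Elf64_Phdr *phdrs, int count)
{
    for (int i = 0; i < count; i++)
        if (phdrs[i].p_type == PT_NOTE &&
            phdrs[i].p_vaddr == (Elf64_Addr)-1)
            return true;

    return false;
}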
Thanks
Wen Congyang
> Dave