Hi Dave,

On a test system I booted the kernel with the 'nokaslr' option. Checking phys_base and the KASLR offset there:

crash> help -m |grep phys_base
                phys_base: 0
     text hit rate: 66% (5171 of 7801)

crash> help -k | grep relocate
      relocate: 0  (KASLR offset: 0 / 0MB)
     text hit rate: 66% (5171 of 7801)

crash> 


I'm not sure whether phys_base can legitimately be 0.

Question: are these values fine for reading the memory images by specifying
"--machdep phys_base=0" after booting the main machine with the 'nokaslr' option?
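
That is, after rebooting the main machine with 'nokaslr' and capturing a new
vmss/vmem pair, would something along these lines be the right way to open it?
(Just a sketch with placeholder file names; the --machdep override may even be
redundant if phys_base really is 0.)

    crash --machdep phys_base=0 vmlinux dump.vmss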

Thank you,
Eshak

On Wed, Feb 7, 2018 at 10:49 AM, Dave Anderson <anderson@redhat.com> wrote:


----- Original Message -----
> Hi Dave,
>
> Thanks for the info.
> I've installed 7.2.0-1.fc28 and was able to run crash on live system.
>
> Unfortunately, KASLR is enabled.

Yes, I'm afraid that is unfortunate.  I don't know how you can determine
what the KASLR offset is, and without that, the dumpfile is pretty
much useless.

The best thing you can do is to prepare for the *next* crash by stashing
the phys_base and KASLR offset values.  You can also boot the kernel with
"nokaslr" on the boot command line.

Dave




>
>
> text hit rate: 66% (5171 of 7801)
>
> help -m |grep phys_base
> phys_base: 10d000000
> text hit rate: 66% (5171 of 7801)
>
> help -k | grep relocate
> relocate: ffffffffe1000000 (KASLR offset: 1f000000 / 496MB)
> text hit rate: 66% (5171 of 7801)
>
> Is there any other information I can get from the vmem/vmss files, such as
> the processes that were running at the time or tasks blocked on I/O?
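> For example, would commands like these work once the dumpfile is readable
> (my guess at the relevant ones, assuming a normal session)?
>
>     crash> ps                # all tasks and their states
>     crash> foreach UN bt     # backtraces of tasks in uninterruptible sleep
>     crash> log               # kernel ring buffer at the time of suspend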
>
> Thank you,
> Eshak
>
> On Wed, Feb 7, 2018 at 6:28 AM, Dave Anderson < anderson@redhat.com > wrote:
>
>
>
>
> ----- Original Message -----
> > That's fixed upstream. You'll have to download the crash sources from
> > github
> > and build the latest and greatest.
>
> You might be able to run the Fedora 28 rawhide version here:
>
> Information for build crash-7.2.0-1.fc28
> https://koji.fedoraproject.org/koji/buildinfo?buildID=978501
>
> That version has the fix for the init_level4_pgt issue. I'm not sure
> whether you'll run into anything else.
>
> Dave
>
>
> >
> >
> >
> >
> > Sent from my Verizon, Samsung Galaxy smartphone
> >
> > -------- Original message --------
> > From: Eshak < tmdeshak@gmail.com >
> > Date: 2/6/18 9:27 PM (GMT-05:00)
> > To: "Discussion list for crash utility usage, maintenance and development"
> > < crash-utility@redhat.com >
> > Subject: Re: [Crash-utility] linux_banner has garbage
> >
> > Hi Dave,
> >
> > I have /proc/kcore, but I'm getting a "cannot resolve init_level4_pgt"
> > error.
> >
> >
> >
> > [root@gt-Server2-gmt proc]# crash
> > /home/mfusion/vmem_vmss_jan26/usr/lib/debug/usr/lib/modules/4.14.11-coreos/vmlinux
> > /proc/kcore
> >
> > crash 7.1.9-3.fc27
> > Copyright (C) 2002-2016 Red Hat, Inc.
> > Copyright (C) 2004, 2005, 2006, 2010 IBM Corporation
> > Copyright (C) 1999-2006 Hewlett-Packard Co
> > Copyright (C) 2005, 2006, 2011, 2012 Fujitsu Limited
> > Copyright (C) 2006, 2007 VA Linux Systems Japan K.K.
> > Copyright (C) 2005, 2011 NEC Corporation
> > Copyright (C) 1999, 2002, 2007 Silicon Graphics, Inc.
> > Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc.
> > This program is free software, covered by the GNU General Public License,
> > and you are welcome to change it and/or distribute copies of it under
> > certain conditions. Enter "help copying" to see the conditions.
> > This program has absolutely no warranty. Enter "help warranty" for details.
> >
> > crash: /dev/tty: No such device or address
> > NOTE: stdin: not a tty
> >
> > GNU gdb (GDB) 7.6
> > Copyright (C) 2013 Free Software Foundation, Inc.
> > License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> > This is free software: you are free to change and redistribute it.
> > There is NO WARRANTY, to the extent permitted by law. Type "show copying"
> > and "show warranty" for details.
> > This GDB was configured as "x86_64-unknown-linux-gnu"...
> >
> > WARNING: kernel relocated [496MB]: patching 69420 gdb minimal_symbol values
> >
> > crash: cannot resolve "init_level4_pgt"
> >
> > [root@gt-Server2-gmt proc]#
> > But I believe this is fixed in crash 7.2. I have raised an issue against
> > CoreOS to make crash 7.2 available in the toolbox packages
> > ( https://github.com/coreos/bugs/issues/2347 ).
> >
> > Meanwhile, is there any workaround for this?
> >
> > -Eshak
> >
> > On Tue, Feb 6, 2018 at 6:02 PM, anderson < anderson@prospeed.net > wrote:
> >
> >
> >
> >
> >
> > To run live, you need either /dev/mem, /proc/kcore, or the /dev/crash driver.
> > You could try "crash vmlinux /proc/kcore" to see if it's available. If not,
> > you could try building the /dev/crash driver module, but I don't know whether
> > CoreOS offers a kernel-devel package that you could build the driver against.
> > The driver source comes with the crash source package in the memory_driver
> > subdirectory.
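> >
> > If a matching kernel-devel/build tree does turn out to be available, building
> > and loading the driver is roughly (a sketch; the path depends on where the
> > crash source tree is unpacked):
> >
> >     cd crash-*/memory_driver
> >     make
> >     insmod ./crash.ko        # provides /dev/crash
> >     crash vmlinux            # crash should then pick up /dev/crash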
> >
> > Dave
> >
> >
> > Sent from my Verizon, Samsung Galaxy smartphone
> >
> > -------- Original message --------
> > From: Eshak < tmdeshak@gmail.com >
> > Date: 2/6/18 8:35 PM (GMT-05:00)
> > To: "Discussion list for crash utility usage, maintenance and development"
> > <
> > crash-utility@redhat.com >
> > Cc: hfu < hfu@vmware.com >
> > Subject: Re: [Crash-utility] linux_banner has garbage
> >
> > Hi Dave,
> >
> > When trying to run crash live, I get an error saying that /dev/mem is not
> > available. I'm running crash from the toolbox in a CoreOS VM. Is crash
> > designed to run from a container?
> >
> >
> >
> >
> >
> > [root@gt-Server2-gmt ~]# crash -d8
> > /home/user/vmem_vmss_jan26/usr/lib/debug/usr/lib/modules/4.14.11-coreos/vmlinux
> >
> > crash 7.1.9-3.fc27
> > Copyright (C) 2002-2016 Red Hat, Inc.
> > Copyright (C) 2004, 2005, 2006, 2010 IBM Corporation
> > Copyright (C) 1999-2006 Hewlett-Packard Co
> > Copyright (C) 2005, 2006, 2011, 2012 Fujitsu Limited
> > Copyright (C) 2006, 2007 VA Linux Systems Japan K.K.
> > Copyright (C) 2005, 2011 NEC Corporation
> > Copyright (C) 1999, 2002, 2007 Silicon Graphics, Inc.
> > Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc.
> > This program is free software, covered by the GNU General Public License,
> > and you are welcome to change it and/or distribute copies of it under
> > certain conditions. Enter "help copying" to see the conditions.
> > This program has absolutely no warranty. Enter "help warranty" for details.
> >
> > get_live_memory_source: /dev/mem
> >
> > crash: /dev/mem: No such file or directory
> >
> > [root@gt-Server2-gmt ~]#
> >
> > Thank you,
> > Eshak
> >
> > On Tue, Feb 6, 2018 at 3:05 PM, Eshak < tmdeshak@gmail.com > wrote:
> >
> >
> >
> > Thanks for the info, Dave.
> > Unfortunately, I cannot run crash live on the machine because the VM is in a
> > hung state right now. After resetting the VM (by tomorrow), I will check
> > KASLR and phys_base and try the suggested option.
> >
> > The complete output of crash is below:
> >
> >
> > [root@gt-Server2-gmt user]# crash -d8
> > /home/mfusion/vmem_vmss_jan26/usr/lib/debug/usr/lib/modules/4.14.11-coreos/vmlinux
> > /home/mfusion/vmem_vmss_jan26/usr/lib/modules/4.14.11-coreos/build/System.map
> > /home/mfusion/vmem_vmss_jan26/gt-Server2-gmt-612746ca.vmss
> >
> > crash 7.1.9-3.fc27
> > Copyright (C) 2002-2016 Red Hat, Inc.
> > Copyright (C) 2004, 2005, 2006, 2010 IBM Corporation
> > Copyright (C) 1999-2006 Hewlett-Packard Co
> > Copyright (C) 2005, 2006, 2011, 2012 Fujitsu Limited
> > Copyright (C) 2006, 2007 VA Linux Systems Japan K.K.
> > Copyright (C) 2005, 2011 NEC Corporation
> > Copyright (C) 1999, 2002, 2007 Silicon Graphics, Inc.
> > Copyright (C) 1999, 2000, 2001, 2002 Mission Critical Linux, Inc.
> > This program is free software, covered by the GNU General Public License,
> > and you are welcome to change it and/or distribute copies of it under
> > certain conditions. Enter "help copying" to see the conditions.
> > This program has absolutely no warranty. Enter "help warranty" for details.
> >
> > crash: diskdump / compressed kdump: dump does not have panic dump header
> > crash: sadump: read dump device as media format
> > crash: sadump: does not have partition header
> > vmw: Header: id=bed2bed2 version=8 numgroups=95
> > vmw: Checkpoint is 64-bit
> > vmw: Group: Checkpoint offset=0x1dbc size=0x0x3ab.
> > vmw: Group: GuestVars offset=0x2167 size=0x0xa3.
> > vmw: Group: cpuid offset=0x220a size=0x0x5e0e.
> > vmw: Group: cpu offset=0x8018 size=0x0x615bb.
> > vmw: Group: BusMemSample offset=0x695d3 size=0x0x1c.
> > vmw: Group: UUIDVMX offset=0x695ef size=0x0x2e.
> > vmw: Group: StateLogger offset=0x6961d size=0x0x2.
> > vmw: Group: memory offset=0x6961f size=0x0xa8.
> > vmw: Item align_mask[0][0] => position=0x69633 size=0x4: 0000FFFF
> > vmw: Item regionsCount => position=0x69645 size=0x4: 00000002
> > vmw: Item regionPageNum[0] => position=0x6965c size=0x4: 00000000
> > vmw: Item regionPPN[0] => position=0x6966f size=0x4: 00000000
> > vmw: Item regionSize[0] => position=0x69683 size=0x4: 000C0000
> > vmw: Item regionPageNum[1] => position=0x6969a size=0x4: 000C0000
> > vmw: Item regionPPN[1] => position=0x696ad size=0x4: 00100000
> > vmw: Item regionSize[1] => position=0x696c1 size=0x4: 00E40000
> > vmw: Group: MStats offset=0x696c7 size=0x0x1936.
> > vmw: Group: Snapshot offset=0x6affd size=0x0x4b9c.
> > vmw: Group: pic offset=0x6fb99 size=0x0x511.
> > vmw: Group: FTCpt offset=0x700aa size=0x0x2.
> > vmw: Group: ide1:0 offset=0x700ac size=0x0x16e.
> > vmw: Group: scsi0:0 offset=0x7021a size=0x0x46.
> > vmw: Group: Migrate offset=0x70260 size=0x0x2.
> > vmw: Group: TimeTracker offset=0x70262 size=0x0x99.
> > vmw: Group: Backdoor offset=0x702fb size=0x0x2e.
> > vmw: Group: PCI offset=0x70329 size=0x0x13.
> > vmw: Group: Cs440bx offset=0x7033c size=0x0x40539.
> > vmw: Group: ExtCfgDevice offset=0xb0875 size=0x0x30.
> > vmw: Group: Floppy offset=0xb08a5 size=0x0x918c.
> > vmw: Group: AcpiNotify offset=0xb9a31 size=0x0x1b.
> > vmw: Group: vcpuHotPlug offset=0xb9a4c size=0x0xf5.
> > vmw: Group: devHP offset=0xb9b41 size=0x0x86.
> > vmw: Group: ACPIWake offset=0xb9bc7 size=0x0x1b.
> > vmw: Group: DevicesPowerOn offset=0xb9be2 size=0x0x2.
> > vmw: Group: PCIBridge0 offset=0xb9be4 size=0x0x272.
> > vmw: Group: PCIBridge4 offset=0xb9e56 size=0x0x48e.
> > vmw: Group: pciBridge4:1 offset=0xba2e4 size=0x0x48e.
> > vmw: Group: pciBridge4:2 offset=0xba772 size=0x0x48e.
> > vmw: Group: pciBridge4:3 offset=0xbac00 size=0x0x48e.
> > vmw: Group: pciBridge4:4 offset=0xbb08e size=0x0x48e.
> > vmw: Group: pciBridge4:5 offset=0xbb51c size=0x0x48e.
> > vmw: Group: pciBridge4:6 offset=0xbb9aa size=0x0x48e.
> > vmw: Group: pciBridge4:7 offset=0xbbe38 size=0x0x48e.
> > vmw: Group: PCIBridge5 offset=0xbc2c6 size=0x0x48e.
> > vmw: Group: pciBridge5:1 offset=0xbc754 size=0x0x48e.
> > vmw: Group: pciBridge5:2 offset=0xbcbe2 size=0x0x48e.
> > vmw: Group: pciBridge5:3 offset=0xbd070 size=0x0x48e.
> > vmw: Group: pciBridge5:4 offset=0xbd4fe size=0x0x48e.
> > vmw: Group: pciBridge5:5 offset=0xbd98c size=0x0x48e.
> > vmw: Group: pciBridge5:6 offset=0xbde1a size=0x0x48e.
> > vmw: Group: pciBridge5:7 offset=0xbe2a8 size=0x0x48e.
> > vmw: Group: PCIBridge6 offset=0xbe736 size=0x0x48e.
> > vmw: Group: pciBridge6:1 offset=0xbebc4 size=0x0x48e.
> > vmw: Group: pciBridge6:2 offset=0xbf052 size=0x0x48e.
> > vmw: Group: pciBridge6:3 offset=0xbf4e0 size=0x0x48e.
> > vmw: Group: pciBridge6:4 offset=0xbf96e size=0x0x48e.
> > vmw: Group: pciBridge6:5 offset=0xbfdfc size=0x0x48e.
> > vmw: Group: pciBridge6:6 offset=0xc028a size=0x0x48e.
> > vmw: Group: pciBridge6:7 offset=0xc0718 size=0x0x48e.
> > vmw: Group: PCIBridge7 offset=0xc0ba6 size=0x0x48e.
> > vmw: Group: pciBridge7:1 offset=0xc1034 size=0x0x48e.
> > vmw: Group: pciBridge7:2 offset=0xc14c2 size=0x0x48e.
> > vmw: Group: pciBridge7:3 offset=0xc1950 size=0x0x48e.
> > vmw: Group: pciBridge7:4 offset=0xc1dde size=0x0x48e.
> > vmw: Group: pciBridge7:5 offset=0xc226c size=0x0x48e.
> > vmw: Group: pciBridge7:6 offset=0xc26fa size=0x0x48e.
> > vmw: Group: pciBridge7:7 offset=0xc2b88 size=0x0x48e.
> > vmw: Group: vide offset=0xc3016 size=0x0x10bb7.
> > vmw: Group: SCSI0 offset=0xd3bcd size=0x0x1200f.
> > vmw: Group: VGA offset=0xe5bdc size=0x0x404aa.
> > vmw: Group: SVGA offset=0x126086 size=0x0xa6e6.
> > vmw: Group: Ethernet0 offset=0x13076c size=0x0x15312.
> > vmw: Group: hpet0 offset=0x145a7e size=0x0xb4a.
> > vmw: Group: ich7m.hpet offset=0x1465c8 size=0x0xb4a.
> > vmw: Group: vmci0 offset=0x147112 size=0x0x11b0.
> > vmw: Group: OEMDevice offset=0x1482c2 size=0x0x17.
> > vmw: Group: HotButton offset=0x1482d9 size=0x0x36.
> > vmw: Group: vsock offset=0x14830f size=0x0x33a.
> > vmw: Group: GuestMsg offset=0x148649 size=0x0xcf.
> > vmw: Group: GuestRpc offset=0x148718 size=0x0x3d2.
> > vmw: Group: Timer offset=0x148aea size=0x0x308.
> > vmw: Group: ACPI offset=0x148df2 size=0x0x3a.
> > vmw: Group: XPMode offset=0x148e2c size=0x0xb.
> > vmw: Group: Tools offset=0x148e37 size=0x0x2e.
> > vmw: Group: Tools Install offset=0x148e65 size=0x0x19.
> > vmw: Group: GuestAppMonitor offset=0x148e7e size=0x0xc3.
> > vmw: Group: MKSVMX offset=0x148f41 size=0x0x4cc.
> > vmw: Group: ToolsDeployPkg offset=0x14940d size=0x0x2.
> > vmw: Group: DMA offset=0x14940f size=0x0x3a4.
> > vmw: Group: BackdoorAPM offset=0x1497b3 size=0x0xd.
> > vmw: Group: CMOS offset=0x1497c0 size=0x0x27c.
> > vmw: Group: FlashRam offset=0x149a3c size=0x0x42058.
> > vmw: Group: smram offset=0x18ba94 size=0x0x2801b.
> > vmw: Group: A20 offset=0x1b3aaf size=0x0x10.
> > vmw: Group: backdoorAbsMouse offset=0x1b3abf size=0x0x13.
> > vmw: Group: Keyboard offset=0x1b3ad2 size=0x0x5ef.
> > vmw: Group: SIO offset=0x1b40c1 size=0x0x86.
> > vmw: Group: monitorLate offset=0x1b4147 size=0x0x2.
> > vmw: Group: MemoryHotplug offset=0x1b4149 size=0x0x9fd4.
> > vmw: Group: devices offset=0x1be11d size=0x0x3f.
> > vmw: Group: configdbFT offset=0x1be15c size=0x0x2.
> > vmw: Group: FeatureCompat offset=0x1be15e size=0x0xde3.
> > vmw: Group: NamespaceMgr offset=0x1bef41 size=0x0x2.
> > vmw: Memory dump is not part of this vmss file.
> > vmw: Try to locate the companion vmem file ...
> > vmw: vmem file: /home/mfusion/vmem_vmss_jan26/gt-mfusion2-gmt-612746ca.vmem
> >
> > readmem: read_vmware_vmss()
> > crash: /dev/tty: No such device or address
> > NOTE: stdin: not a tty
> >
> > crash: pv_init_ops exists: ARCH_PVOPS
> > gdb
> > /home/mfusion/vmem_vmss_jan26/usr/lib/debug/usr/lib/modules/4.14.11-coreos/vmlinux
> > GNU gdb (GDB) 7.6
> > Copyright (C) 2013 Free Software Foundation, Inc.
> > License GPLv3+: GNU GPL version 3 or later <
> > http://gnu.org/licenses/gpl.html
> > >
> > This is free software: you are free to change and redistribute it.
> > There is NO WARRANTY, to the extent permitted by law. Type "show copying"
> > and "show warranty" for details.
> > This GDB was configured as "x86_64-unknown-linux-gnu"...
> > GETBUF(280 -> 0)
> > GETBUF(1500 -> 1)
> > FREEBUF(1)
> > FREEBUF(0)
> > <readmem: ffffffff82042858, KVADDR, "page_offset_base", 8, (FOE), d24568>
> > <read_vmware_vmss: addr: ffffffff82042858 paddr: 2042858 cnt: 8>
> > <readmem: ffffffff81c18500, KVADDR, "kernel_config_data", 32768, (ROE),
> > 25df690>
> > <read_vmware_vmss: addr: ffffffff81c18500 paddr: 1c18500 cnt: 2816>
> > <read_vmware_vmss: addr: ffffffff81c19000 paddr: 1c19000 cnt: 4096>
> > <read_vmware_vmss: addr: ffffffff81c1a000 paddr: 1c1a000 cnt: 4096>
> > <read_vmware_vmss: addr: ffffffff81c1b000 paddr: 1c1b000 cnt: 4096>
> > <read_vmware_vmss: addr: ffffffff81c1c000 paddr: 1c1c000 cnt: 4096>
> > <read_vmware_vmss: addr: ffffffff81c1d000 paddr: 1c1d000 cnt: 4096>
> > <read_vmware_vmss: addr: ffffffff81c1e000 paddr: 1c1e000 cnt: 4096>
> > <read_vmware_vmss: addr: ffffffff81c1f000 paddr: 1c1f000 cnt: 4096>
> > <read_vmware_vmss: addr: ffffffff81c20000 paddr: 1c20000 cnt: 1280>
> > WARNING: could not find MAGIC_START!
> > GETBUF(280 -> 0)
> > FREEBUF(0)
> > GETBUF(64 -> 0)
> > <readmem: ffffffff82126300, KVADDR, "possible", 64, (ROE), f64640>
> > <read_vmware_vmss: addr: ffffffff82126300 paddr: 2126300 cnt: 64>
> > cpu_possible_mask: cpus: 4 6 8 9 12 13 14 15 18 20 21 23 27 28 29 31 32 33
> > 36
> > 37 39 44 45 47 48 49 50 51 54 55 57 58 62 64 65 67 68 69 71 72 73 74 77 80
> > 81 83 84 85 88 89 92 101 102 105 107 108 109 111 112 120 122 124 125 126
> > 128
> > 133 137 138 139 140 141 144 146 148 149 150 151 152 153 154 155 156 157 159
> > 161 163 165 166 167 170 175 176 177 178 179 180 183 184 185 186 187 191 192
> > 194 195 196 198 201 205 207 208 209 212 217 219 221 225 227 228 230 232 233
> > 238 240 242 244 246 251 252 253 255 256 257 258 262 266 267 269 271 274 278
> > 280 281 283 286 287 290 293 294 295 296 297 298 299 300 301 302 310 312 313
> > 314 315 316 317 318 319 320 321 322 325 327 328 329 330 331 335 336 338 339
> > 340 342 345 351 353 354 355 356 357 358 359 360 361 362 364 365 366 367 368
> > 370 372 373 375 377 378 380 381 383 384 389 391 392 393 394 395 396 397 398
> > 404 405 406 409 410 413 414 416 417 418 425 427 428 429 430 431 432 433 437
> > 440 444 445 446 447 448 449 451 453 454 457 459 460 461 464 467 468 473 474
> > 475 476 477 478 480 482 483 486 488 489 490 491 492 493 495 496 500 501 502
> > 503 509 510 511
> > <readmem: ffffffff82126280, KVADDR, "present", 64, (ROE), f64640>
> > <read_vmware_vmss: addr: ffffffff82126280 paddr: 2126280 cnt: 64>
> > cpu_present_mask: cpus: 1 3 5 10 11 12 15 16 17 18 19 20 21 23 24 26 29 30
> > 32
> > 33 34 35 36 41 43 44 50 56 58 60 63 67 68 69 71 72 73 76 77 78 79 80 81 86
> > 90 91 93 96 97 99 104 105 106 108 111 114 117 118 119 121 123 124 127 129
> > 131 133 135 137 138 140 143 144 145 149 152 155 156 158 162 164 165 166 167
> > 169 172 173 174 175 178 180 183 189 191 192 193 194 196 197 199 201 203 206
> > 210 211 212 213 215 216 218 219 222 224 225 226 227 228 230 231 232 235 238
> > 240 244 245 246 250 251 253 255 256 258 259 263 264 267 268 271 272 274 275
> > 276 278 280 282 284 285 287 289 293 294 295 297 299 300 301 303 305 308 310
> > 312 316 317 318 320 322 323 324 326 328 329 331 332 334 335 340 342 343 344
> > 345 346 349 350 351 356 358 364 366 368 369 370 371 372 377 379 381 385 386
> > 387 394 397 400 401 404 406 407 409 412 413 414 415 416 417 419 420 421 422
> > 423 425 427 429 430 431 433 439 440 444 447 449 452 454 455 456 459 460 462
> > 463 465 467 468 469 470 472 476 478 480 482 483 484 485 486 488 489 490 491
> > 492 494 495 496 501 504 506 508 509 510 511
> > <readmem: ffffffff821262c0, KVADDR, "online", 64, (ROE), f64640>
> > <read_vmware_vmss: addr: ffffffff821262c0 paddr: 21262c0 cnt: 64>
> > cpu_online_mask: cpus: 0 3 4 8 10 11 12 13 15 17 21 22 27 28 29 32 33 35 44
> > 45 46 48 49 51 52 53 54 57 58 59 62 63 65 66 67 69 72 74 75 86 87 89 90 91
> > 96 97 99 100 101 106 107 110 111 113 115 116 117 122 123 125 128 129 130
> > 134
> > 135 136 142 143 145 147 149 150 153 156 157 160 161 163 164 166 167 168 173
> > 176 179 180 184 187 189 191 192 193 198 199 201 204 205 207 208 209 212 214
> > 215 216 217 218 221 222 224 227 228 234 235 236 239 241 242 244 245 248 250
> > 252 253 255 256 257 262 263 265 266 267 268 271 273 275 276 277 278 279 280
> > 288 289 291 293 295 296 297 300 304 305 306 308 310 312 313 317 318 320 325
> > 326 327 328 329 330 331 332 333 337 338 339 342 344 346 348 349 350 352 355
> > 356 361 368 369 375 380 381 382 383 385 387 389 390 391 394 395 397 398 399
> > 400 401 403 405 406 408 409 410 412 415 416 417 418 422 424 427 429 430 432
> > 434 436 438 441 443 444 451 452 456 460 462 464 466 469 473 474 476 479 480
> > 484 487 488 489 494 498 499 500 503 504 506 509 510
> > <readmem: ffffffff82126240, KVADDR, "active", 64, (ROE), f64640>
> > <read_vmware_vmss: addr: ffffffff82126240 paddr: 2126240 cnt: 64>
> > cpu_active_mask: cpus: 0 1 3 4 5 8 12 13 14 15 16 19 21 23 25 27 29 37 43
> > 45
> > 48 51 54 55 56 57 58 59 60 61 62 63 66 67 68 70 73 75 76 78 80 81 84 86 92
> > 94 95 96 98 99 100 103 104 105 109 110 111 114 115 122 124 126 127 128 130
> > 134 136 137 141 142 144 146 150 152 154 158 159 161 164 168 169 170 172 174
> > 177 178 180 181 183 189 192 195 198 202 204 207 208 212 213 214 215 216 217
> > 222 225 227 228 229 230 233 234 235 238 239 240 242 244 247 248 250 253 255
> > 256 258 260 267 268 270 273 274 275 276 279 280 282 286 287 294 296 300 302
> > 303 305 308 310 311 312 313 314 315 316 317 319 320 323 325 328 330 331 333
> > 336 337 340 341 342 344 348 349 350 351 352 353 354 355 359 363 364 366 368
> > 370 373 374 375 379 380 382 383 385 387 388 391 397 398 400 401 402 403 408
> > 409 410 412 414 416 417 418 420 423 430 431 439 445 446 448 449 450 451 458
> > 459 461 463 468 469 470 471 472 474 475 477 481 483 484 486 488 490 495 496
> > 497 500 501 503 504 505 509
> > FREEBUF(0)
> > <readmem: ffffffff82031aa0, KVADDR, "pv_init_ops", 8, (ROE), 7fffa1149090>
> > <read_vmware_vmss: addr: ffffffff82031aa0 paddr: 2031aa0 cnt: 8>
> > GETBUF(280 -> 0)
> > FREEBUF(0)
> > GETBUF(280 -> 0)
> > FREEBUF(0)
> > <readmem: ffffffff84710860, KVADDR, "shadow_timekeeper xtime_sec", 8,
> > (ROE),
> > 7fffa1149030>
> > <read_vmware_vmss: addr: ffffffff84710860 paddr: 4710860 cnt: 8>
> > xtime timespec.tv_sec: 54d151d0619456fc: (null)
> > <readmem: ffffffff82012304, KVADDR, "init_uts_ns", 390, (ROE), d0a7fc>
> > <read_vmware_vmss: addr: ffffffff82012304 paddr: 2012304 cnt: 390>
> > utsname:
> > sysname: (not printable)
> > nodename: (not printable)
> > release: (not printable)
> > version: (not printable)
> > machine: (not printable)
> > domainname: (not printable)
> > crash: cannot determine base kernel version
> > <readmem: ffffffff81c00100, KVADDR, "accessible check", 8, (ROE|Q),
> > 7fffa1146390>
> > <read_vmware_vmss: addr: ffffffff81c00100 paddr: 1c00100 cnt: 8>
> > <readmem: ffffffff81c00100, KVADDR, "read_string characters", 1499,
> > (ROE|Q),
> > 7fffa11466f0>
> > <read_vmware_vmss: addr: ffffffff81c00100 paddr: 1c00100 cnt: 1499>
> > linux_banner:
> > -ش????kB??C???Ã͞}&k?Xb?8/?ν?fF??&v;?Š???? ??
> > crash:
> > /home/user/vmem_vmss_jan26/usr/lib/modules/4.14.11-coreos/build/System.map
> > and /home/mfusion/vmem_vmss_jan26/gt-Server2-gmt-612746ca.vmss do not
> > match!
> >
> > Usage:
> >
> > crash [OPTION]... NAMELIST MEMORY-IMAGE[@ADDRESS] (dumpfile form)
> > crash [OPTION]... [NAMELIST] (live system form)
> >
> > Enter "crash -h" for details.
> > [root@gt-Server2-gmt user]#
> >
> > Please let me know if you need any further information.
> > I will get back to you after checking KASLR and phys_base.
> >
> > Thank you,
> > Eshak
> >
> > On Tue, Feb 6, 2018 at 7:26 AM, Dave Anderson < anderson@redhat.com >
> > wrote:
> >
> >
> >
> >
> >
> > ----- Original Message -----
> > >
> > >
> > > ----- Original Message -----
> > > > Hello,
> > > >
> > > > We have a CoreOS VM (46 vCPU, 60 GB RAM) freeze issue and are hoping to
> > > > find out what is going on in it at the time of the freeze. When the VM
> > > > froze, we had no access to it via ssh, and ping worked sometimes but not
> > > > always. So we suspended the VM, which created the vmem and vmss files.
> > > >
> > > > Since this is a CoreOS VM, I used toolbox to install and run crash.
> > > > When trying to read these files with the crash utility, I get the
> > > > message below:
> > > >
> > > >
> > > >
> > > > <read_vmware_vmss: addr: ffffffff81c00100 paddr: 1c00100 cnt: 8>
> > > >
> > > > <readmem: ffffffff81c00100, KVADDR, "read_string characters", 1499,
> > > > (ROE|Q),
> > > > 7ffcf595cd70>
> > > >
> > > > <read_vmware_vmss: addr: ffffffff81c00100 paddr: 1c00100 cnt: 1499>
> > > >
> > > > linux_banner:
> > > >
> > > > -ش????kB??C???Ã͞}&k?Xb?8/?ν?fF??&v;?Š???? ??
> > >
> > > It would have been helpful to see the full crash -d# log, but I'm
> > > presuming
> > > that the utsname data and the cpus_[possible/present/online/active]_mask
> > > output
> > > that gets displayed just before the linux_banner output are also
> > > nonsensical?
> > >
> > > Typically this kind of problem is because phys_base cannot be determined,
> > > or if KASLR is enabled, the KASLR offset cannot be determined. Those two
> > > items are encoded into the dumpfile header for kdump dumpfiles, but there
> > > is no such information in a vmss dumpfile header.
> > >
> > > Can you run crash live on the machine? You can see whether the phys_base
> > > and KASLR offset are non-zero on the live system by entering:
> > >
> > > crash> help -m | grep phys_base
> > > phys_base: 129800000
> > > crash> help -k | grep relocate
> > > relocate: ffffffffcf400000 (KASLR offset: 30c00000 / 780MB)
> > > crash>
> > >
> > > If relocate is 0 (KASLR not enabled), then the phys_base value can
> > > be applied to your vmcore by entering, for example, "--machdep
> > > phys_base=780m"
> > > on the crash command line (using your phys_base).
> >
> > Sorry, my mistake -- it would be "--machdep phys_base=129800000".
> >
> > Dave
> >
> >
>

--
Crash-utility mailing list
Crash-utility@redhat.com
https://www.redhat.com/mailman/listinfo/crash-utility