-----Original Message-----
> Hi Kazu,
> On Mon, Apr 04, 2022 at 07:44:09AM +0000, HAGIO KAZUHITO(萩尾 一仁) wrote:
> > -----Original Message-----
> > > 1.) The bitmap of the vmcore file can be very big in a server kernel panic,
> > > such as over 256M.
> > >
> > > This patch uses mmap/madvise to improve the read speed
> > > of the bitmap in the non-FLAT_FORMAT code path.
> > >
> > > Using MADV_SEQUENTIAL for madvise triggers aggressive
> > > readahead when reading the bitmap.
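For illustration, here is a minimal sketch of what the mmap/madvise approach described in the quoted patch can look like; the path, offset, and length arguments are hypothetical placeholders rather than the actual makedumpfile variables:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Map the bitmap region of the dump file read-only and ask the
     * kernel for aggressive readahead.  All names here are made up
     * for the sketch. */
    static char *map_bitmap(const char *path, off_t bitmap_off, size_t bitmap_len)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            perror("open");
            return NULL;
        }

        /* mmap() needs a page-aligned file offset. */
        long page = sysconf(_SC_PAGESIZE);
        off_t aligned = bitmap_off & ~((off_t)page - 1);
        size_t maplen = bitmap_len + (size_t)(bitmap_off - aligned);

        char *addr = mmap(NULL, maplen, PROT_READ, MAP_PRIVATE, fd, aligned);
        close(fd);              /* the mapping stays valid after close() */
        if (addr == MAP_FAILED) {
            perror("mmap");
            return NULL;
        }

        /* Hint that the bitmap will be read sequentially. */
        if (madvise(addr, maplen, MADV_SEQUENTIAL) < 0)
            perror("madvise"); /* only a hint, not fatal */

        return addr + (bitmap_off - aligned);
    }

MADV_SEQUENTIAL is only advisory, so a failed madvise() does not break the read; it only loses the extra readahead.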
> >
> > I'm not familiar with the detailed behavior of madvise() and am a little
> > concerned about the sentence "may be freed soon after they are accessed"
> > below:
> >
> > MADV_SEQUENTIAL
> >        Expect page references in sequential order. (Hence, pages in
> >        the given range can be aggressively read ahead, and may be freed
> >        soon after they are accessed.)
> >
> > MADV_WILLNEED
> >        Expect access in the near future. (Hence, it might be a good
> >        idea to read some pages ahead.)
> >
> > dd->bitmap is often used after the initialization process; is there
> > no side effect?
> I am not sure.
> IMHO, the side effect can be ignored.
> 1.) In the original code, dd->bitmap is malloc'ed,
> so it is backed by anonymous pages in the kernel and can be swapped to disk.
> 2.) If we use mmap, dd->bitmap is backed by the page cache in the kernel.
> It can be freed, and can also be read back again when we access it.
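For illustration, a minimal sketch of the two cases above, with hypothetical names (fd is an open descriptor for the dump file, len the bitmap size, off its page-aligned file offset); this is not the actual makedumpfile code:

    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* 1.) malloc: anonymous memory; under memory pressure it can only
     *     be swapped out, not simply dropped. */
    static char *bitmap_malloc(int fd, size_t len, off_t off)
    {
        char *buf = malloc(len);
        if (buf && pread(fd, buf, len, off) != (ssize_t)len) {
            free(buf);
            return NULL;
        }
        return buf;
    }

    /* 2.) mmap: file-backed page cache; clean pages can be reclaimed
     *     at any time and are transparently read back from the file
     *     on the next access (which is the concern discussed here). */
    static char *bitmap_mmap(int fd, size_t len, off_t off)
    {
        char *buf = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, off);
        return buf == MAP_FAILED ? NULL : buf;
    }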
So I was a bit concerned that it might be freed earlier than we expect.
> >
> > So if MADV_WILLNEED has the same or a similarly good effect, I think it will be
> > easier to accept. Do you have any benchmarks or information?
> I tested MADV_WILLNEED, and it is indeed better than MADV_SEQUENTIAL.
> The following are the benchmark results (tested three times for each case):
> 1.) The original code takes about 29s to read out the whole bitmap.
> 2.) With MADV_SEQUENTIAL, the read (caused by memcpy) takes
> about 17s.
> 3.) With MADV_WILLNEED, the read (caused by memcpy) takes
> about 14s.
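For illustration, switching between the two hints is a one-flag change, and the read can be timed around the memcpy loop roughly as in the sketch below; the names are hypothetical and this is not the benchmark code that produced the numbers above:

    #include <string.h>
    #include <sys/mman.h>
    #include <time.h>

    /* Copy the mapped bitmap into "out" and return the elapsed seconds.
     * "willneed" selects MADV_WILLNEED instead of MADV_SEQUENTIAL. */
    static double read_bitmap_timed(char *bitmap, size_t len, char *out, int willneed)
    {
        struct timespec t0, t1;

        madvise(bitmap, len, willneed ? MADV_WILLNEED : MADV_SEQUENTIAL);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t done = 0; done < len; done += 4096) {
            size_t chunk = len - done < 4096 ? len - done : 4096;
            /* the page faults here trigger the actual disk reads */
            memcpy(out + done, bitmap + done, chunk);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }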
Good, thanks for testing. Then, also considering its meaning,
MADV_WILLNEED would be better. I will reply to the v2 patch.
Thanks,
Kazu