OGAWA Hirofumi <hirofumi@mail.parknet.co.jp> writes:
Dave Anderson <anderson@redhat.com> writes:
> Here's an example. I ran your patch on a live 3.10-based kernel, and
> see these counts on the xfs-based caches:
[...]
> ffff880035d54300 xfs_efd_item 400 0 300 15 8k
> ffff880035d54200 xfs_da_state 480 0 272 8 16k
> ffff880035d54100 xfs_btree_cur 208 0 312 8 8k
> ffff880035d54000 xfs_log_ticket 184 3 682 31 4k
[...]
> xfs_efd_item 180 300 400 20 2 : tunables 0 0 0 : slabdata 15 15 0
> xfs_da_state 272 272 480 34 4 : tunables 0 0 0 : slabdata 8 8 0
> xfs_btree_cur 312 312 208 39 2 : tunables 0 0 0 : slabdata 8 8 0
> xfs_log_ticket 682 682 184 22 1 : tunables 0 0 0 : slabdata 31 31 0
> crash>
>
> which show 180, 272, 312 and 682 active counts.
>
> Can you explain the discrepancy?
The active count just means "not free on a non-percpu slab", not the
number actually allocated. The algorithm of the active count is: total
objects minus the free objects found on the node partial lists, so
objects sitting free on a per-cpu freelist are never subtracted and
still show as active.
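For reference, this is roughly what SLUB does for /proc/slabinfo (a
simplified paraphrase of get_slabinfo() in mm/slub.c from a 3.10-era
kernel, not a verbatim copy):

	static void get_slabinfo(struct kmem_cache *s, struct slabinfo *sinfo)
	{
		unsigned long nr_slabs = 0, nr_objs = 0, nr_free = 0;
		int node;

		for_each_online_node(node) {
			struct kmem_cache_node *n = get_node(s, node);

			if (!n)
				continue;
			nr_slabs += node_nr_slabs(n);	/* every slab on the node */
			nr_objs += node_nr_objs(n);	/* every object, free or not */
			/* only the node partial lists are scanned for free objects */
			nr_free += count_partial(n, count_free);
		}

		sinfo->active_objs = nr_objs - nr_free;	/* per-cpu frees stay "active" */
		sinfo->num_objs = nr_objs;
		/* ... */
	}

So for a cache whose free objects all sit on per-cpu slabs, slabinfo
reports them as active even though nothing is allocated, which is
exactly the xfs_* discrepancy above.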
OK. Here is a simpler proof; is the following enough to convince you?
The test code is below. 1) Create a dedicated slab cache called
"test_memory". 2) Allocate 3 objects on the local cpu. 3) Then free one
on the local cpu, free one on a remote cpu, and leave one allocated
(pinned).
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/smp.h>

struct test_memory {
	unsigned long dummy;
};

static struct kmem_cache *t_cachep;
static struct test_memory *t_m[3];
static int t_cpu;

/* Runs on the remote cpu via smp_call_function_single(). */
static void t_free(void *info)
{
	kmem_cache_free(t_cachep, info);
}

static int t_init(void)
{
	unsigned int cpu;

	t_cachep = kmem_cache_create("test_memory", sizeof(struct test_memory),
				     0, SLAB_NOLEAKTRACE /* prevent merge */,
				     NULL);
	if (!t_cachep)
		return -ENOMEM;

	/* Assumes two online cpus: "cpu" is 0 or 1 and "!cpu" is the
	 * other one.  (GFP_KERNEL inside get_cpu()/put_cpu() is sloppy,
	 * but harmless for this throwaway test.) */
	cpu = get_cpu();
	t_cpu = cpu;
	t_m[cpu] = kmem_cache_alloc(t_cachep, GFP_KERNEL);	/* local */
	t_m[!cpu] = kmem_cache_alloc(t_cachep, GFP_KERNEL);	/* remote */
	t_m[2] = kmem_cache_alloc(t_cachep, GFP_KERNEL);	/* pin */

	kmem_cache_free(t_cachep, t_m[cpu]);			/* local free */
	smp_call_function_single(!cpu, t_free, t_m[!cpu], 1);	/* remote free */
	put_cpu();

	return 0;
}
module_init(t_init);

MODULE_LICENSE("GPL");
I took a vmcore after running the above, then ran crash against the
same vmcore with and without my patch.
[Without my patch]
crash> p t_cpu
t_cpu = $1 = 1
crash> p t_m
t_m = $2 =
{0xffff880119b65008, 0xffff880119b65000, 0xffff880119b65010}
crash> kmem -S test_memory
CACHE NAME OBJSIZE ALLOCATED TOTAL SLABS SSIZE
ffff88011aec3b40 test_memory 8 2 512 1 4k
CPU 0 KMEM_CACHE_CPU:
ffff88011bbdc200
CPU 0 SLAB:
(empty)
CPU 0 PARTIAL:
(empty)
CPU 1 KMEM_CACHE_CPU:
ffff88011bddc200
CPU 1 SLAB:
SLAB MEMORY NODE TOTAL ALLOCATED FREE
ffffea000466d940 ffff880119b65000 0 512 2 510
FREE / [ALLOCATED]
ffff880119b65000 (cpu 1 cache)
[ffff880119b65008]
[ffff880119b65010]
ffff880119b65018 (cpu 1 cache)
ffff880119b65020 (cpu 1 cache)
[...]
the local cpu free is shown as ffff880119b65000 - OK
the remote cpu free is shown as [ffff880119b65008] - FAIL
the pinned object is shown as [ffff880119b65010] - OK
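To see why the remote free is missed: a free from another cpu takes
SLUB's slow path and links the object onto the slab page's own
freelist (page->freelist), while a local free goes onto
kmem_cache_cpu->freelist, so both lists have to be walked.  The sketch
below is illustrative only, not crash's actual source; mark_free() and
the function names are made up for this example:

	/* hypothetical helper that records one object as free */
	static void mark_free(void *obj);

	/* Each free SLUB object stores a pointer to the next free
	 * object at s->offset bytes into itself, so a freelist is
	 * walked like this: */
	static void walk_freelist(void *freelist, unsigned long offset)
	{
		void *p;

		for (p = freelist; p; p = *(void **)((char *)p + offset))
			mark_free(p);
	}

	static void mark_cpu_slab_free(struct kmem_cache *s,
				       struct kmem_cache_cpu *c,
				       struct page *page)
	{
		/* local frees: the per-cpu freelist */
		walk_freelist(c->freelist, s->offset);
		/* remote frees: the slab page's own freelist -- missing
		 * this list is why ffff880119b65008 showed as
		 * [ALLOCATED] above */
		walk_freelist(page->freelist, s->offset);
	}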
[With my patch]
crash> kmem -S test_memory
CACHE NAME OBJSIZE ALLOCATED TOTAL SLABS SSIZE
ffff88011aec3b40 test_memory 8 1 512 1 4k
CPU 0 KMEM_CACHE_CPU:
ffff88011bbdc200
CPU 0 SLAB:
(empty)
CPU 0 PARTIAL:
(empty)
CPU 1 KMEM_CACHE_CPU:
ffff88011bddc200
CPU 1 SLAB:
SLAB MEMORY NODE TOTAL ALLOCATED FREE
ffffea000466d940 ffff880119b65000 0 512 1 511
FREE / [ALLOCATED]
ffff880119b65000 (cpu 1 cache)
ffff880119b65008 (cpu 1 cache)
[ffff880119b65010]
ffff880119b65018 (cpu 1 cache)
ffff880119b65020 (cpu 1 cache)
[...]
the local cpu free is shown as ffff880119b65000 - OK
the remote cpu free is shown as ffff880119b65008 - OK
the pinned object is shown as [ffff880119b65010] - OK
Thanks.
--
OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>