Dave Anderson <anderson(a)redhat.com> writes:
Here's an example. I ran your patch on a live 3.10-based kernel,
and
see these counts on the xfs-based caches:
[...]
ffff880035d54300 xfs_efd_item      400       0    300     15     8k
ffff880035d54200 xfs_da_state 480 0 272 8 16k
ffff880035d54100 xfs_btree_cur 208 0 312 8 8k
ffff880035d54000 xfs_log_ticket 184 3 682 31 4k
[...]
xfs_efd_item     180    300    400   20    2 : tunables 0 0 0 : slabdata  15  15  0
xfs_da_state     272    272    480   34    4 : tunables 0 0 0 : slabdata   8   8  0
xfs_btree_cur    312    312    208   39    2 : tunables 0 0 0 : slabdata   8   8  0
xfs_log_ticket   682    682    184   22    1 : tunables 0 0 0 : slabdata  31  31  0
crash>
which show 180, 272, 312 and 682 active counts.
Can you explain the discrepancy?
The "active" count only excludes free objects on node (non-percpu) slabs; it is not the number of objects actually allocated.
The active count is computed in v4.5 mm/slub.c:get_slabinfo():
	for_each_kmem_cache_node(s, node, n) {
		nr_slabs += node_nr_slabs(n);
		nr_objs += node_nr_objs(n);
		nr_free += count_partial(n, count_free);
	}
	[...]
	sinfo->active_objs = nr_objs - nr_free;
count_partial() sums "page.objects - page.inuse" over node->partial. IOW, all objects on percpu slabs are counted as active_objs (i.e. "active objects" == "all objects on node full slabs" + "objects in use on node partial slabs" + "all objects on percpu slabs").
Another example (though I haven't confirmed this is an accurate method) is:
# pwd
/sys/kernel/slab/UDPv6
# cat alloc_fastpath alloc_slowpath free_fastpath free_slowpath; grep ^UDPv6 /proc/slabinfo
16173 C0=1087 C1=10007 C2=2101 C3=1364 C4=232 C5=725 C6=323 C7=334
8 C0=1 C1=1 C2=1 C3=1 C4=1 C5=1 C6=1 C7=1
16160 C0=1082 C1=10007 C2=2102 C3=1358 C4=228 C5=726 C6=322 C7=335
3 C0=1 C2=2
UDPv6            240    240   1088   30    8 : tunables 0 0 0 : slabdata   8   8  0
slabinfo says 240 active objects, but the sysfs stats (which require CONFIG_SLUB_STATS) say (16173 + 8) - (16160 + 3) == 18 objects are actually allocated.
Thanks.
--
OGAWA Hirofumi <hirofumi(a)mail.parknet.co.jp>