On Friday, 7 January 2011 16:27:55, Dave Anderson wrote:
> ----- Original Message -----
>
> > The 'vcpu' field changed from a fixed array to a pointer to an array.
> > Change xen_hyper_store_domain_context to account for this change.
>
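[For context, this is the struct domain layout change the patch handles, sketched from the description above. The member names follow the Xen source, but MAX_VIRT_CPUS and the omitted members are illustrative only:]

#define MAX_VIRT_CPUS 128       /* illustrative value */
struct vcpu;                    /* opaque here */

/* Older hypervisors: "vcpu" is a fixed array embedded in struct domain. */
struct domain_old {
    struct vcpu *vcpu[MAX_VIRT_CPUS];
    /* ... */
};

/* Newer hypervisors: "vcpu" points to a dynamically allocated array,
 * and "max_vcpus" records how many entries it holds. */
struct domain_new {
    struct vcpu **vcpu;
    unsigned int max_vcpus;
    /* ... */
};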
> Presuming this tests OK on older hypervisor dumps, it looks fine.
> Queued for the next release, pending that testing.
Hi Dave,

Older Xen hypervisors didn't have the "max_vcpus" field in struct domain, so there is in fact no change for them.

However, thinking about it some more, this might be affected by the increase of XEN_HYPER_MAX_VIRT_CPUS. Although I haven't seen a failure, let me check first whether a crash session on a dump from Xen 3.3 attempts to read past the array boundaries.
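[The check to rule that out would essentially be a clamp before filling the fixed array. A minimal standalone sketch, where the struct, constant, and helper are stand-ins rather than the actual crash internals:]

#include <stdio.h>

/* Stand-in for the capacity of the fixed dc->vcpu[] array in crash. */
#define XEN_HYPER_MAX_VIRT_CPUS 64

struct domain_context {
    unsigned long vcpu[XEN_HYPER_MAX_VIRT_CPUS];
    int nr_vcpus;
};

/* Copy at most XEN_HYPER_MAX_VIRT_CPUS entries even if the dump reports
 * a larger max_vcpus, so we never store past the end of dc->vcpu[]. */
static void store_vcpus(struct domain_context *dc,
                        const unsigned long *vcpu_array,
                        unsigned int max_vcpus)
{
    unsigned int i;

    if (max_vcpus > XEN_HYPER_MAX_VIRT_CPUS)
        max_vcpus = XEN_HYPER_MAX_VIRT_CPUS;    /* clamp */

    for (i = 0; i < max_vcpus; i++) {
        dc->vcpu[i] = vcpu_array[i];
        if (dc->vcpu[i])
            dc->nr_vcpus++;
    }
}

int main(void)
{
    struct domain_context dc = { .nr_vcpus = 0 };
    unsigned long vcpus[128] = { 0xffff830000010000UL, 0xffff830000020000UL };

    /* A dump claiming 128 VCPUs must not overflow the 64-entry array. */
    store_vcpus(&dc, vcpus, 128);
    printf("stored %d active vcpus\n", dc.nr_vcpus);
    return 0;
}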
Petr Tesarik
SUSE Linux

Ah, OK, I appreciate that. Is it something that can be segregated by the Xen version (like the virtual address change)? That would make me feel much warmer and fuzzier...

Dave
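[For what it's worth, the patch already segregates the two cases by structure layout rather than by version number, via XEN_HYPER_VALID_MEMBER and MEMBER_TYPE. A toy sketch of that decision, with stand-in types in place of the real crash machinery:]

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for what crash derives from the dump's debuginfo. */
enum vcpu_layout { VCPU_EMBEDDED_ARRAY, VCPU_POINTER };

struct domain_abi {
    bool has_max_vcpus;        /* does struct domain have "max_vcpus"? */
    enum vcpu_layout layout;   /* is "vcpu" a fixed array or a pointer? */
};

/* Key off the structure layout found in the dump, not off a Xen version
 * number, so old and new hypervisors take the right path automatically. */
static const char *vcpu_strategy(const struct domain_abi *abi)
{
    if (abi->layout == VCPU_EMBEDDED_ARRAY)
        return "read the embedded vcpu[] array in place";
    return abi->has_max_vcpus
        ? "read max_vcpus, then fetch the external vcpu array"
        : "pointer layout without max_vcpus: fall back to the old limit";
}

int main(void)
{
    struct domain_abi old_xen = { false, VCPU_EMBEDDED_ARRAY };
    struct domain_abi new_xen = { true,  VCPU_POINTER };

    printf("old: %s\nnew: %s\n",
           vcpu_strategy(&old_xen), vcpu_strategy(&new_xen));
    return 0;
}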
> Thanks,
> Dave
>
> > Signed-off-by: Petr Tesarik <ptesarik(a)suse.cz>
> > ---
> > xen_hyper.c | 40 +++++++++++++++++++++++++++++++++++++---
> > xen_hyper_defs.h | 1 +
> > 2 files changed, 38 insertions(+), 3 deletions(-)
> >
> > --- a/xen_hyper.c
> > +++ b/xen_hyper.c
> > @@ -219,6 +219,7 @@ xen_hyper_domain_init(void)
> >
> >     XEN_HYPER_MEMBER_OFFSET_INIT(domain_is_shutting_down, "domain", "is_shutting_down");
> >     XEN_HYPER_MEMBER_OFFSET_INIT(domain_is_shut_down, "domain", "is_shut_down");
> >     XEN_HYPER_MEMBER_OFFSET_INIT(domain_vcpu, "domain", "vcpu");
> > +   XEN_HYPER_MEMBER_OFFSET_INIT(domain_max_vcpus, "domain", "max_vcpus");
> >     XEN_HYPER_MEMBER_OFFSET_INIT(domain_arch, "domain", "arch");
> >
> >     XEN_HYPER_STRUCT_SIZE_INIT(arch_shared_info, "arch_shared_info");
> > @@ -1207,6 +1208,8 @@ struct xen_hyper_domain_context *
> > xen_hyper_store_domain_context(struct xen_hyper_domain_context *dc,
> >         ulong domain, char *dp)
> > {
> > +   unsigned int max_vcpus;
> > +   char *vcpup;
> >     int i;
> >
> >     dc->domain = domain;
> > @@ -1244,9 +1247,40 @@ xen_hyper_store_domain_context(struct xe
> >         dc->domain_flags = XEN_HYPER_DOMF_ERROR;
> >     }
> >     dc->evtchn = ULONG(dp + XEN_HYPER_OFFSET(domain_evtchn));
> > -   for (i = 0; i < XEN_HYPER_MAX_VIRT_CPUS; i++) {
> > -       dc->vcpu[i] = ULONG(dp + XEN_HYPER_OFFSET(domain_vcpu) + i*sizeof(void *));
> > -       if (dc->vcpu[i]) XEN_HYPER_NR_VCPUS_IN_DOM(dc)++;
> > +
> > +   if (XEN_HYPER_VALID_MEMBER(domain_max_vcpus)) {
> > +       max_vcpus = UINT(dp + XEN_HYPER_OFFSET(domain_max_vcpus));
> > +   } else {
> > +       max_vcpus = XEN_HYPER_MAX_VIRT_CPUS;
> > +   }
> > +   if (MEMBER_TYPE("domain", "vcpu") == TYPE_CODE_ARRAY)
> > +       vcpup = dp + XEN_HYPER_OFFSET(domain_vcpu);
> > +   else {
> > +       ulong vcpu_array = ULONG(dp + XEN_HYPER_OFFSET(domain_vcpu));
> > +       if (vcpu_array && max_vcpus) {
> > +           if (!(vcpup =
> > +                 malloc(max_vcpus * sizeof(void *)))) {
> > +               error(FATAL, "cannot malloc VCPU array for domain %lx.",
> > +                     domain);
> > +           }
> > +           if (!readmem(vcpu_array, KVADDR,
> > +                        vcpup, max_vcpus * sizeof(void*),
> > +                        "VCPU array", RETURN_ON_ERROR)) {
> > +               error(FATAL, "cannot read VCPU array for domain %lx.",
> > +                     domain);
> > +           }
> > +       } else {
> > +           vcpup = NULL;
> > +       }
> > +   }
> > +   if (vcpup) {
> > +       for (i = 0; i < max_vcpus; i++) {
> > +           dc->vcpu[i] = ULONG(vcpup + i*sizeof(void *));
> > +           if (dc->vcpu[i]) XEN_HYPER_NR_VCPUS_IN_DOM(dc)++;
> > +       }
> > +       if (vcpup != dp + XEN_HYPER_OFFSET(domain_vcpu)) {
> > +           free(vcpup);
> > +       }
> >     }
> >
> >     return dc;
> > --- a/xen_hyper_defs.h
> > +++ b/xen_hyper_defs.h
> > @@ -674,6 +674,7 @@ struct xen_hyper_offset_table {
> >     long domain_is_shutting_down;
> >     long domain_is_shut_down;
> >     long domain_vcpu;
> > +   long domain_max_vcpus;
> >     long domain_arch;
> > #ifdef IA64
> >     /* mm_struct */
> >