Date: Thu, 26 Jan 2023 22:11:42 GMT
Message-Id: <202301262211.30QMBgxY020710@gitrepo.freebsd.org>
To: src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org,
    dev-commits-src-branches@FreeBSD.org
From: John Baldwin
Subject: git: 6fc2d2dbe235 - stable/13 - vmm: Refactor storage of CPU-dependent per-vCPU data.
List-Id: Commits to the stable branches of the FreeBSD src repository
X-Git-Committer: jhb
X-Git-Repository: src
X-Git-Refname: refs/heads/stable/13
X-Git-Reftype: branch
X-Git-Commit: 6fc2d2dbe2354f99c3973f1d855479d9dd65232e

The branch stable/13 has been updated by jhb:

URL: https://cgit.FreeBSD.org/src/commit/?id=6fc2d2dbe2354f99c3973f1d855479d9dd65232e

commit 6fc2d2dbe2354f99c3973f1d855479d9dd65232e
Author:     John Baldwin
AuthorDate: 2022-11-18 17:59:21 +0000
Commit:     John Baldwin
CommitDate: 2023-01-26 21:44:52 +0000

    vmm: Refactor storage of CPU-dependent per-vCPU data.

    Rather than storing static arrays of per-vCPU data in the CPU-specific
    per-VM structure, adopt a more dynamic model similar to that used to
    manage CPU-specific per-VM data.  That is, add new vmmops methods to
    init and cleanup a single vCPU.  The init method returns a pointer
    that is stored in 'struct vcpu' as a cookie pointer.  This cookie
    pointer is now passed to other vmmops callbacks in place of the
    integer index.  The index is now only used in KTR traces and when
    calling back into the CPU-independent layer.

    Reviewed by:    corvink, markj
    Differential Revision:  https://reviews.freebsd.org/D37151

    (cherry picked from commit 1aa5150479bf35c90c6770e6ea90e8462cfb6bf9)
---
 sys/amd64/include/vmm.h       |  24 +-
 sys/amd64/vmm/amd/svm.c       | 606 ++++++++++++++++---------------
 sys/amd64/vmm/amd/svm.h       |   4 +-
 sys/amd64/vmm/amd/svm_msr.c   |  21 +-
 sys/amd64/vmm/amd/svm_msr.h   |  15 +-
 sys/amd64/vmm/amd/svm_softc.h |  34 +-
 sys/amd64/vmm/amd/vmcb.c      |  80 ++---
 sys/amd64/vmm/amd/vmcb.h      |  23 +-
 sys/amd64/vmm/intel/vmx.c     | 809 ++++++++++++++++++++++--------------------
 sys/amd64/vmm/intel/vmx.h     |  12 +-
 sys/amd64/vmm/intel/vmx_msr.c |  74 ++--
 sys/amd64/vmm/intel/vmx_msr.h |  16 +-
 sys/amd64/vmm/vmm.c           |  65 ++--
 13 files changed, 926 insertions(+), 857 deletions(-)
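The cookie-pointer model the commit message describes can be summarized with a
small sketch before reading the diff.  The types and functions below
(struct backend_vcpu, backend_vcpu_init, backend_vcpu_cleanup) are simplified,
hypothetical stand-ins for illustration only; they are not part of the commit:

#include <stdlib.h>

/* Hypothetical backend-private per-vCPU state (stand-in for svm_vcpu). */
struct backend_vcpu {
	int vcpuid;		/* kept only for traces and callbacks */
	unsigned long nextrip;	/* example of backend-private state */
};

/* vcpu_init-style method: allocate per-vCPU state, return it as a cookie. */
static void *
backend_vcpu_init(void *vmi, int vcpu_id)
{
	struct backend_vcpu *vcpu;

	(void)vmi;
	vcpu = calloc(1, sizeof(*vcpu));
	vcpu->vcpuid = vcpu_id;
	vcpu->nextrip = ~0UL;
	return (vcpu);		/* stored by the caller as a cookie pointer */
}

/* Later callbacks receive the cookie instead of an integer index. */
static void
backend_vcpu_cleanup(void *vmi, void *vcpui)
{
	(void)vmi;
	free(vcpui);
}

This is the shape of the change: allocation moves out of a static per-VM array
into an init method, and the returned pointer replaces the index everywhere a
backend callback needs per-vCPU state.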
diff --git a/sys/amd64/include/vmm.h b/sys/amd64/include/vmm.h
index 62456fe9d12d..9f76eda9d8e8 100644
--- a/sys/amd64/include/vmm.h
+++ b/sys/amd64/include/vmm.h
@@ -167,27 +167,29 @@ typedef int (*vmm_init_func_t)(int ipinum);
 typedef int (*vmm_cleanup_func_t)(void);
 typedef void (*vmm_resume_func_t)(void);
 typedef void * (*vmi_init_func_t)(struct vm *vm, struct pmap *pmap);
-typedef int (*vmi_run_func_t)(void *vmi, int vcpu, register_t rip,
+typedef int (*vmi_run_func_t)(void *vmi, void *vcpui, register_t rip,
 	    struct pmap *pmap, struct vm_eventinfo *info);
 typedef void (*vmi_cleanup_func_t)(void *vmi);
-typedef int (*vmi_get_register_t)(void *vmi, int vcpu, int num,
+typedef void * (*vmi_vcpu_init_func_t)(void *vmi, int vcpu_id);
+typedef void (*vmi_vcpu_cleanup_func_t)(void *vmi, void *vcpui);
+typedef int (*vmi_get_register_t)(void *vmi, void *vcpui, int num,
 	    uint64_t *retval);
-typedef int (*vmi_set_register_t)(void *vmi, int vcpu, int num,
+typedef int (*vmi_set_register_t)(void *vmi, void *vcpui, int num,
 	    uint64_t val);
-typedef int (*vmi_get_desc_t)(void *vmi, int vcpu, int num,
+typedef int (*vmi_get_desc_t)(void *vmi, void *vcpui, int num,
 	    struct seg_desc *desc);
-typedef int (*vmi_set_desc_t)(void *vmi, int vcpu, int num,
+typedef int (*vmi_set_desc_t)(void *vmi, void *vcpui, int num,
 	    struct seg_desc *desc);
-typedef int (*vmi_get_cap_t)(void *vmi, int vcpu, int num, int *retval);
-typedef int (*vmi_set_cap_t)(void *vmi, int vcpu, int num, int val);
+typedef int (*vmi_get_cap_t)(void *vmi, void *vcpui, int num, int *retval);
+typedef int (*vmi_set_cap_t)(void *vmi, void *vcpui, int num, int val);
 typedef struct vmspace * (*vmi_vmspace_alloc)(vm_offset_t min, vm_offset_t max);
 typedef void (*vmi_vmspace_free)(struct vmspace *vmspace);
-typedef struct vlapic * (*vmi_vlapic_init)(void *vmi, int vcpu);
+typedef struct vlapic * (*vmi_vlapic_init)(void *vmi, void *vcpui);
 typedef void (*vmi_vlapic_cleanup)(void *vmi, struct vlapic *vlapic);
 typedef int (*vmi_snapshot_t)(void *vmi, struct vm_snapshot_meta *meta);
 typedef int (*vmi_snapshot_vcpu_t)(void *vmi, struct vm_snapshot_meta *meta,
-    int vcpu);
-typedef int (*vmi_restore_tsc_t)(void *vmi, int vcpuid, uint64_t now);
+    void *vcpui);
+typedef int (*vmi_restore_tsc_t)(void *vmi, void *vcpui, uint64_t now);
 
 struct vmm_ops {
 	vmm_init_func_t		modinit;	/* module wide initialization */
@@ -197,6 +199,8 @@ struct vmm_ops {
 	vmi_init_func_t		init;		/* vm-specific initialization */
 	vmi_run_func_t		run;
 	vmi_cleanup_func_t	cleanup;
+	vmi_vcpu_init_func_t	vcpu_init;
+	vmi_vcpu_cleanup_func_t	vcpu_cleanup;
 	vmi_get_register_t	getreg;
 	vmi_set_register_t	setreg;
 	vmi_get_desc_t		getdesc;
diff --git a/sys/amd64/vmm/amd/svm.c b/sys/amd64/vmm/amd/svm.c
index fca3722ed7f4..dee88f11dce2 100644
--- a/sys/amd64/vmm/amd/svm.c
+++ b/sys/amd64/vmm/amd/svm.c
@@ -132,8 +132,8 @@ static VMM_STAT_AMD(VCPU_EXITINTINFO, "VM exits during event delivery");
 static VMM_STAT_AMD(VCPU_INTINFO_INJECTED, "Events pending at VM entry");
 static VMM_STAT_AMD(VMEXIT_VINTR, "VM exits due to interrupt window");
 
-static int svm_getdesc(void *arg, int vcpu, int reg, struct seg_desc *desc);
-static int svm_setreg(void *arg, int vcpu, int ident, uint64_t val);
+static int svm_getdesc(void *arg, void *vcpui, int reg, struct seg_desc *desc);
+static int svm_setreg(void *arg, void *vcpui, int ident, uint64_t val);
 
 static __inline int
 flush_by_asid(void)
@@ -283,18 +283,18 @@ svm_modresume(void)
 
 #ifdef BHYVE_SNAPSHOT
 int
-svm_set_tsc_offset(struct svm_softc *sc, int vcpu, uint64_t offset)
+svm_set_tsc_offset(struct svm_softc *sc, struct svm_vcpu *vcpu, uint64_t offset)
 {
 	int error;
 	struct vmcb_ctrl *ctrl;
 
-	ctrl = svm_get_vmcb_ctrl(sc, vcpu);
+	ctrl = svm_get_vmcb_ctrl(vcpu);
 	ctrl->tsc_offset = offset;
 
-	svm_set_dirty(sc, vcpu, VMCB_CACHE_I);
-	VCPU_CTR1(sc->vm, vcpu, "tsc offset changed to %#lx", offset);
+	svm_set_dirty(vcpu, VMCB_CACHE_I);
+	VCPU_CTR1(sc->vm, vcpu->vcpuid, "tsc offset changed to %#lx", offset);
 
-	error = vm_set_tsc_offset(sc->vm, vcpu, offset);
+	error = vm_set_tsc_offset(sc->vm, vcpu->vcpuid, offset);
 
 	return (error);
 }
@@ -382,26 +382,27 @@ svm_msr_rd_ok(uint8_t *perm_bitmap, uint64_t msr)
 }
 
 static __inline int
-svm_get_intercept(struct svm_softc *sc, int vcpu, int idx, uint32_t bitmask)
+svm_get_intercept(struct svm_softc *sc, struct svm_vcpu *vcpu, int idx,
+    uint32_t bitmask)
 {
 	struct vmcb_ctrl *ctrl;
 
 	KASSERT(idx >=0 && idx < 5, ("invalid intercept index %d", idx));
 
-	ctrl = svm_get_vmcb_ctrl(sc, vcpu);
+	ctrl = svm_get_vmcb_ctrl(vcpu);
 
 	return (ctrl->intercept[idx] & bitmask ? 1 : 0);
 }
 
 static __inline void
-svm_set_intercept(struct svm_softc *sc, int vcpu, int idx, uint32_t bitmask,
-    int enabled)
+svm_set_intercept(struct svm_softc *sc, struct svm_vcpu *vcpu, int idx,
+    uint32_t bitmask, int enabled)
 {
 	struct vmcb_ctrl *ctrl;
 	uint32_t oldval;
 
 	KASSERT(idx >=0 && idx < 5, ("invalid intercept index %d", idx));
 
-	ctrl = svm_get_vmcb_ctrl(sc, vcpu);
+	ctrl = svm_get_vmcb_ctrl(vcpu);
 
 	oldval = ctrl->intercept[idx];
 	if (enabled)
@@ -410,28 +411,30 @@ svm_set_intercept(struct svm_softc *sc, int vcpu, int idx, uint32_t bitmask,
 		ctrl->intercept[idx] &= ~bitmask;
 
 	if (ctrl->intercept[idx] != oldval) {
-		svm_set_dirty(sc, vcpu, VMCB_CACHE_I);
-		VCPU_CTR3(sc->vm, vcpu, "intercept[%d] modified "
+		svm_set_dirty(vcpu, VMCB_CACHE_I);
+		VCPU_CTR3(sc->vm, vcpu->vcpuid, "intercept[%d] modified "
 		    "from %#x to %#x", idx, oldval, ctrl->intercept[idx]);
 	}
 }
 
 static __inline void
-svm_disable_intercept(struct svm_softc *sc, int vcpu, int off, uint32_t bitmask)
+svm_disable_intercept(struct svm_softc *sc, struct svm_vcpu *vcpu, int off,
+    uint32_t bitmask)
 {
 
 	svm_set_intercept(sc, vcpu, off, bitmask, 0);
 }
 
 static __inline void
-svm_enable_intercept(struct svm_softc *sc, int vcpu, int off, uint32_t bitmask)
+svm_enable_intercept(struct svm_softc *sc, struct svm_vcpu *vcpu, int off,
+    uint32_t bitmask)
 {
 
 	svm_set_intercept(sc, vcpu, off, bitmask, 1);
 }
 
 static void
-vmcb_init(struct svm_softc *sc, int vcpu, uint64_t iopm_base_pa,
+vmcb_init(struct svm_softc *sc, struct svm_vcpu *vcpu, uint64_t iopm_base_pa,
     uint64_t msrpm_base_pa, uint64_t np_pml4)
 {
 	struct vmcb_ctrl *ctrl;
@@ -439,8 +442,8 @@ vmcb_init(struct svm_softc *sc, int vcpu, uint64_t iopm_base_pa,
 	uint32_t mask;
 	int n;
 
-	ctrl = svm_get_vmcb_ctrl(sc, vcpu);
-	state = svm_get_vmcb_state(sc, vcpu);
+	ctrl = svm_get_vmcb_ctrl(vcpu);
+	state = svm_get_vmcb_state(vcpu);
 
 	ctrl->iopm_base_pa = iopm_base_pa;
 	ctrl->msrpm_base_pa = msrpm_base_pa;
@@ -465,7 +468,7 @@ vmcb_init(struct svm_softc *sc, int vcpu, uint64_t iopm_base_pa,
 	 * Intercept everything when tracing guest exceptions otherwise
 	 * just intercept machine check exception.
 	 */
-	if (vcpu_trace_exceptions(sc->vm, vcpu)) {
+	if (vcpu_trace_exceptions(sc->vm, vcpu->vcpuid)) {
 		for (n = 0; n < 32; n++) {
 			/*
 			 * Skip unimplemented vectors in the exception bitmap.
@@ -506,7 +509,7 @@ vmcb_init(struct svm_softc *sc, int vcpu, uint64_t iopm_base_pa,
 	svm_enable_intercept(sc, vcpu, VMCB_CTRL2_INTCPT, VMCB_INTCPT_CLGI);
 	svm_enable_intercept(sc, vcpu, VMCB_CTRL2_INTCPT, VMCB_INTCPT_SKINIT);
 	svm_enable_intercept(sc, vcpu, VMCB_CTRL2_INTCPT, VMCB_INTCPT_ICEBP);
-	if (vcpu_trap_wbinvd(sc->vm, vcpu)) {
+	if (vcpu_trap_wbinvd(sc->vm, vcpu->vcpuid)) {
 		svm_enable_intercept(sc, vcpu, VMCB_CTRL2_INTCPT,
 		    VMCB_INTCPT_WBINVD);
 	}
@@ -559,10 +562,6 @@ static void *
 svm_init(struct vm *vm, pmap_t pmap)
 {
 	struct svm_softc *svm_sc;
-	struct svm_vcpu *vcpu;
-	vm_paddr_t msrpm_pa, iopm_pa, pml4_pa;
-	int i;
-	uint16_t maxcpus;
 
 	svm_sc = malloc(sizeof (*svm_sc), M_SVM, M_WAITOK | M_ZERO);
 
@@ -576,7 +575,7 @@ svm_init(struct vm *vm, pmap_t pmap)
 		panic("contigmalloc of SVM IO bitmap failed");
 
 	svm_sc->vm = vm;
-	svm_sc->nptp = (vm_offset_t)vtophys(pmap->pm_pmltop);
+	svm_sc->nptp = vtophys(pmap->pm_pmltop);
 
 	/*
 	 * Intercept read and write accesses to all MSRs.
	 */
@@ -611,23 +610,28 @@ svm_init(struct vm *vm, pmap_t pmap)
 	/* Intercept access to all I/O ports. */
 	memset(svm_sc->iopm_bitmap, 0xFF, SVM_IO_BITMAP_SIZE);
 
-	iopm_pa = vtophys(svm_sc->iopm_bitmap);
-	msrpm_pa = vtophys(svm_sc->msr_bitmap);
-	pml4_pa = svm_sc->nptp;
-	maxcpus = vm_get_maxcpus(svm_sc->vm);
-	for (i = 0; i < maxcpus; i++) {
-		vcpu = svm_get_vcpu(svm_sc, i);
-		vcpu->vmcb = malloc_aligned(sizeof(struct vmcb), PAGE_SIZE,
-		    M_SVM, M_WAITOK | M_ZERO);
-		vcpu->nextrip = ~0;
-		vcpu->lastcpu = NOCPU;
-		vcpu->vmcb_pa = vtophys(vcpu->vmcb);
-		vmcb_init(svm_sc, i, iopm_pa, msrpm_pa, pml4_pa);
-		svm_msr_guest_init(svm_sc, i);
-	}
 	return (svm_sc);
 }
 
+static void *
+svm_vcpu_init(void *arg, int vcpuid)
+{
+	struct svm_softc *sc = arg;
+	struct svm_vcpu *vcpu;
+
+	vcpu = malloc(sizeof(*vcpu), M_SVM, M_WAITOK | M_ZERO);
+	vcpu->vcpuid = vcpuid;
+	vcpu->vmcb = malloc_aligned(sizeof(struct vmcb), PAGE_SIZE, M_SVM,
+	    M_WAITOK | M_ZERO);
+	vcpu->nextrip = ~0;
+	vcpu->lastcpu = NOCPU;
+	vcpu->vmcb_pa = vtophys(vcpu->vmcb);
+	vmcb_init(sc, vcpu, vtophys(sc->iopm_bitmap), vtophys(sc->msr_bitmap),
+	    sc->nptp);
+	svm_msr_guest_init(sc, vcpu);
+	return (vcpu);
+}
+
 /*
  * Collateral for a generic SVM VM-exit.
  */
@@ -720,8 +724,8 @@ svm_inout_str_count(struct svm_regctx *regs, int rep)
 }
 
 static void
-svm_inout_str_seginfo(struct svm_softc *svm_sc, int vcpu, int64_t info1,
-    int in, struct vm_inout_str *vis)
+svm_inout_str_seginfo(struct svm_softc *svm_sc, struct svm_vcpu *vcpu,
+    int64_t info1, int in, struct vm_inout_str *vis)
 {
 	int error __diagused, s;
 
@@ -774,7 +778,8 @@ svm_paging_info(struct vmcb *vmcb, struct vm_guest_paging *paging)
  * Handle guest I/O intercept.
  */
 static int
-svm_handle_io(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
+svm_handle_io(struct svm_softc *svm_sc, struct svm_vcpu *vcpu,
+    struct vm_exit *vmexit)
 {
 	struct vmcb_ctrl *ctrl;
 	struct vmcb_state *state;
@@ -783,9 +788,9 @@ svm_handle_io(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
 	uint64_t info1;
 	int inout_string;
 
-	state = svm_get_vmcb_state(svm_sc, vcpu);
-	ctrl = svm_get_vmcb_ctrl(svm_sc, vcpu);
-	regs = svm_get_guest_regctx(svm_sc, vcpu);
+	state = svm_get_vmcb_state(vcpu);
+	ctrl = svm_get_vmcb_ctrl(vcpu);
+	regs = svm_get_guest_regctx(vcpu);
 
 	info1 = ctrl->exitinfo1;
 	inout_string = info1 & BIT(2) ? 1 : 0;
@@ -811,7 +816,7 @@ svm_handle_io(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
 	if (inout_string) {
 		vmexit->exitcode = VM_EXITCODE_INOUT_STR;
 		vis = &vmexit->u.inout_str;
-		svm_paging_info(svm_get_vmcb(svm_sc, vcpu), &vis->paging);
+		svm_paging_info(svm_get_vmcb(vcpu), &vis->paging);
 		vis->rflags = state->rflags;
 		vis->cr0 = state->cr0;
 		vis->index = svm_inout_str_index(regs, vmexit->u.inout.in);
@@ -932,12 +937,12 @@ intrtype_to_str(int intr_type)
 /*
  * Inject an event to vcpu as described in section 15.20, "Event injection".
  */
 static void
-svm_eventinject(struct svm_softc *sc, int vcpu, int intr_type, int vector,
-    uint32_t error, bool ec_valid)
+svm_eventinject(struct svm_softc *sc, struct svm_vcpu *vcpu, int intr_type,
+    int vector, uint32_t error, bool ec_valid)
 {
 	struct vmcb_ctrl *ctrl;
 
-	ctrl = svm_get_vmcb_ctrl(sc, vcpu);
+	ctrl = svm_get_vmcb_ctrl(vcpu);
 
 	KASSERT((ctrl->eventinj & VMCB_EVENTINJ_VALID) == 0,
 	    ("%s: event already pending %#lx", __func__, ctrl->eventinj));
@@ -962,24 +967,25 @@ svm_eventinject(struct svm_softc *sc, int vcpu, int intr_type, int vector,
 	if (ec_valid) {
 		ctrl->eventinj |= VMCB_EVENTINJ_EC_VALID;
 		ctrl->eventinj |= (uint64_t)error << 32;
-		VCPU_CTR3(sc->vm, vcpu, "Injecting %s at vector %d errcode %#x",
+		VCPU_CTR3(sc->vm, vcpu->vcpuid,
+		    "Injecting %s at vector %d errcode %#x",
 		    intrtype_to_str(intr_type), vector, error);
 	} else {
-		VCPU_CTR2(sc->vm, vcpu, "Injecting %s at vector %d",
+		VCPU_CTR2(sc->vm, vcpu->vcpuid, "Injecting %s at vector %d",
 		    intrtype_to_str(intr_type), vector);
 	}
 }
 
 static void
-svm_update_virqinfo(struct svm_softc *sc, int vcpu)
+svm_update_virqinfo(struct svm_softc *sc, struct svm_vcpu *vcpu)
 {
 	struct vm *vm;
 	struct vlapic *vlapic;
 	struct vmcb_ctrl *ctrl;
 
 	vm = sc->vm;
-	vlapic = vm_lapic(vm, vcpu);
-	ctrl = svm_get_vmcb_ctrl(sc, vcpu);
+	vlapic = vm_lapic(vm, vcpu->vcpuid);
+	ctrl = svm_get_vmcb_ctrl(vcpu);
 
 	/* Update %cr8 in the emulated vlapic */
 	vlapic_set_cr8(vlapic, ctrl->v_tpr);
@@ -990,12 +996,14 @@ svm_update_virqinfo(struct svm_softc *sc, int vcpu)
 }
 
 static void
-svm_save_intinfo(struct svm_softc *svm_sc, int vcpu)
+svm_save_intinfo(struct svm_softc *svm_sc, struct svm_vcpu *vcpu)
 {
 	struct vmcb_ctrl *ctrl;
 	uint64_t intinfo;
+	int vcpuid;
 
-	ctrl = svm_get_vmcb_ctrl(svm_sc, vcpu);
+	vcpuid = vcpu->vcpuid;
+	ctrl = svm_get_vmcb_ctrl(vcpu);
 	intinfo = ctrl->exitintinfo;
 	if (!VMCB_EXITINTINFO_VALID(intinfo))
 		return;
@@ -1006,15 +1014,15 @@ svm_save_intinfo(struct svm_softc *svm_sc, int vcpu)
 	 * If a #VMEXIT happened during event delivery then record the event
 	 * that was being delivered.
 	 */
-	VCPU_CTR2(svm_sc->vm, vcpu, "SVM:Pending INTINFO(0x%lx), vector=%d.\n",
+	VCPU_CTR2(svm_sc->vm, vcpuid, "SVM:Pending INTINFO(0x%lx), vector=%d.\n",
 	    intinfo, VMCB_EXITINTINFO_VECTOR(intinfo));
-	vmm_stat_incr(svm_sc->vm, vcpu, VCPU_EXITINTINFO, 1);
-	vm_exit_intinfo(svm_sc->vm, vcpu, intinfo);
+	vmm_stat_incr(svm_sc->vm, vcpuid, VCPU_EXITINTINFO, 1);
+	vm_exit_intinfo(svm_sc->vm, vcpuid, intinfo);
 }
 
 #ifdef INVARIANTS
 static __inline int
-vintr_intercept_enabled(struct svm_softc *sc, int vcpu)
+vintr_intercept_enabled(struct svm_softc *sc, struct svm_vcpu *vcpu)
 {
 
 	return (svm_get_intercept(sc, vcpu, VMCB_CTRL1_INTCPT,
@@ -1023,11 +1031,11 @@ vintr_intercept_enabled(struct svm_softc *sc, int vcpu)
 #endif
 
 static __inline void
-enable_intr_window_exiting(struct svm_softc *sc, int vcpu)
+enable_intr_window_exiting(struct svm_softc *sc, struct svm_vcpu *vcpu)
 {
 	struct vmcb_ctrl *ctrl;
 
-	ctrl = svm_get_vmcb_ctrl(sc, vcpu);
+	ctrl = svm_get_vmcb_ctrl(vcpu);
 
 	if (ctrl->v_irq && ctrl->v_intr_vector == 0) {
 		KASSERT(ctrl->v_ign_tpr, ("%s: invalid v_ign_tpr", __func__));
@@ -1036,20 +1044,20 @@ enable_intr_window_exiting(struct svm_softc *sc, int vcpu)
 		return;
 	}
 
-	VCPU_CTR0(sc->vm, vcpu, "Enable intr window exiting");
+	VCPU_CTR0(sc->vm, vcpu->vcpuid, "Enable intr window exiting");
 	ctrl->v_irq = 1;
 	ctrl->v_ign_tpr = 1;
 	ctrl->v_intr_vector = 0;
-	svm_set_dirty(sc, vcpu, VMCB_CACHE_TPR);
+	svm_set_dirty(vcpu, VMCB_CACHE_TPR);
 	svm_enable_intercept(sc, vcpu, VMCB_CTRL1_INTCPT, VMCB_INTCPT_VINTR);
 }
 
 static __inline void
-disable_intr_window_exiting(struct svm_softc *sc, int vcpu)
+disable_intr_window_exiting(struct svm_softc *sc, struct svm_vcpu *vcpu)
 {
 	struct vmcb_ctrl *ctrl;
 
-	ctrl = svm_get_vmcb_ctrl(sc, vcpu);
+	ctrl = svm_get_vmcb_ctrl(vcpu);
 
 	if (!ctrl->v_irq && ctrl->v_intr_vector == 0) {
 		KASSERT(!vintr_intercept_enabled(sc, vcpu),
@@ -1057,35 +1065,36 @@ disable_intr_window_exiting(struct svm_softc *sc, int vcpu)
 		return;
 	}
 
-	VCPU_CTR0(sc->vm, vcpu, "Disable intr window exiting");
+	VCPU_CTR0(sc->vm, vcpu->vcpuid, "Disable intr window exiting");
 	ctrl->v_irq = 0;
 	ctrl->v_intr_vector = 0;
-	svm_set_dirty(sc, vcpu, VMCB_CACHE_TPR);
+	svm_set_dirty(vcpu, VMCB_CACHE_TPR);
 	svm_disable_intercept(sc, vcpu, VMCB_CTRL1_INTCPT, VMCB_INTCPT_VINTR);
 }
 
 static int
-svm_modify_intr_shadow(struct svm_softc *sc, int vcpu, uint64_t val)
+svm_modify_intr_shadow(struct svm_softc *sc, struct svm_vcpu *vcpu,
+    uint64_t val)
 {
 	struct vmcb_ctrl *ctrl;
 	int oldval, newval;
 
-	ctrl = svm_get_vmcb_ctrl(sc, vcpu);
+	ctrl = svm_get_vmcb_ctrl(vcpu);
 	oldval = ctrl->intr_shadow;
 	newval = val ? 1 : 0;
 	if (newval != oldval) {
 		ctrl->intr_shadow = newval;
-		VCPU_CTR1(sc->vm, vcpu, "Setting intr_shadow to %d", newval);
+		VCPU_CTR1(sc->vm, vcpu->vcpuid, "Setting intr_shadow to %d", newval);
 	}
 	return (0);
 }
 
 static int
-svm_get_intr_shadow(struct svm_softc *sc, int vcpu, uint64_t *val)
+svm_get_intr_shadow(struct svm_softc *sc, struct svm_vcpu *vcpu, uint64_t *val)
 {
 	struct vmcb_ctrl *ctrl;
 
-	ctrl = svm_get_vmcb_ctrl(sc, vcpu);
+	ctrl = svm_get_vmcb_ctrl(vcpu);
 	*val = ctrl->intr_shadow;
 	return (0);
 }
@@ -1096,7 +1105,7 @@ svm_get_intr_shadow(struct svm_softc *sc, int vcpu, uint64_t *val)
  * to track when the vcpu is done handling the NMI.
 */
 static int
-nmi_blocked(struct svm_softc *sc, int vcpu)
+nmi_blocked(struct svm_softc *sc, struct svm_vcpu *vcpu)
 {
 	int blocked;
 
@@ -1106,21 +1115,21 @@ nmi_blocked(struct svm_softc *sc, int vcpu)
 }
 
 static void
-enable_nmi_blocking(struct svm_softc *sc, int vcpu)
+enable_nmi_blocking(struct svm_softc *sc, struct svm_vcpu *vcpu)
 {
 
 	KASSERT(!nmi_blocked(sc, vcpu), ("vNMI already blocked"));
-	VCPU_CTR0(sc->vm, vcpu, "vNMI blocking enabled");
+	VCPU_CTR0(sc->vm, vcpu->vcpuid, "vNMI blocking enabled");
 	svm_enable_intercept(sc, vcpu, VMCB_CTRL1_INTCPT, VMCB_INTCPT_IRET);
 }
 
 static void
-clear_nmi_blocking(struct svm_softc *sc, int vcpu)
+clear_nmi_blocking(struct svm_softc *sc, struct svm_vcpu *vcpu)
 {
 	int error __diagused;
 
 	KASSERT(nmi_blocked(sc, vcpu), ("vNMI already unblocked"));
-	VCPU_CTR0(sc->vm, vcpu, "vNMI blocking cleared");
+	VCPU_CTR0(sc->vm, vcpu->vcpuid, "vNMI blocking cleared");
 	/*
 	 * When the IRET intercept is cleared the vcpu will attempt to execute
 	 * the "iret" when it runs next. However, it is possible to inject
@@ -1145,17 +1154,19 @@ clear_nmi_blocking(struct svm_softc *sc, int vcpu)
 #define	EFER_MBZ_BITS	0xFFFFFFFFFFFF0200UL
 
 static int
-svm_write_efer(struct svm_softc *sc, int vcpu, uint64_t newval, bool *retu)
+svm_write_efer(struct svm_softc *sc, struct svm_vcpu *vcpu, uint64_t newval,
+    bool *retu)
 {
 	struct vm_exit *vme;
 	struct vmcb_state *state;
 	uint64_t changed, lma, oldval;
-	int error __diagused;
+	int error __diagused, vcpuid;
 
-	state = svm_get_vmcb_state(sc, vcpu);
+	state = svm_get_vmcb_state(vcpu);
+	vcpuid = vcpu->vcpuid;
 
 	oldval = state->efer;
-	VCPU_CTR2(sc->vm, vcpu, "wrmsr(efer) %#lx/%#lx", oldval, newval);
+	VCPU_CTR2(sc->vm, vcpuid, "wrmsr(efer) %#lx/%#lx", oldval, newval);
 
 	newval &= ~0xFE;		/* clear the Read-As-Zero (RAZ) bits */
 	changed = oldval ^ newval;
@@ -1179,7 +1190,7 @@ svm_write_efer(struct svm_softc *sc, int vcpu, uint64_t newval, bool *retu)
 		goto gpf;
 
 	if (newval & EFER_NXE) {
-		if (!vm_cpuid_capability(sc->vm, vcpu, VCC_NO_EXECUTE))
+		if (!vm_cpuid_capability(sc->vm, vcpuid, VCC_NO_EXECUTE))
 			goto gpf;
 	}
 
@@ -1188,19 +1199,19 @@ svm_write_efer(struct svm_softc *sc, int vcpu, uint64_t newval, bool *retu)
 	 * this is fixed flag guest attempt to set EFER_LMSLE as an error.
 	 */
 	if (newval & EFER_LMSLE) {
-		vme = vm_exitinfo(sc->vm, vcpu);
+		vme = vm_exitinfo(sc->vm, vcpuid);
 		vm_exit_svm(vme, VMCB_EXIT_MSR, 1, 0);
 		*retu = true;
 		return (0);
 	}
 
 	if (newval & EFER_FFXSR) {
-		if (!vm_cpuid_capability(sc->vm, vcpu, VCC_FFXSR))
+		if (!vm_cpuid_capability(sc->vm, vcpuid, VCC_FFXSR))
 			goto gpf;
 	}
 
 	if (newval & EFER_TCE) {
-		if (!vm_cpuid_capability(sc->vm, vcpu, VCC_TCE))
+		if (!vm_cpuid_capability(sc->vm, vcpuid, VCC_TCE))
 			goto gpf;
 	}
 
@@ -1208,18 +1219,18 @@ svm_write_efer(struct svm_softc *sc, int vcpu, uint64_t newval, bool *retu)
 	KASSERT(error == 0, ("%s: error %d updating efer", __func__, error));
 	return (0);
 gpf:
-	vm_inject_gp(sc->vm, vcpu);
+	vm_inject_gp(sc->vm, vcpuid);
 	return (0);
 }
 
 static int
-emulate_wrmsr(struct svm_softc *sc, int vcpu, u_int num, uint64_t val,
-    bool *retu)
+emulate_wrmsr(struct svm_softc *sc, struct svm_vcpu *vcpu, u_int num,
+    uint64_t val, bool *retu)
 {
 	int error;
 
 	if (lapic_msr(num))
-		error = lapic_wrmsr(sc->vm, vcpu, num, val, retu);
+		error = lapic_wrmsr(sc->vm, vcpu->vcpuid, num, val, retu);
 	else if (num == MSR_EFER)
 		error = svm_write_efer(sc, vcpu, val, retu);
 	else
@@ -1229,7 +1240,8 @@ emulate_wrmsr(struct svm_softc *sc, int vcpu, u_int num, uint64_t val,
 }
 
 static int
-emulate_rdmsr(struct svm_softc *sc, int vcpu, u_int num, bool *retu)
+emulate_rdmsr(struct svm_softc *sc, struct svm_vcpu *vcpu, u_int num,
+    bool *retu)
 {
 	struct vmcb_state *state;
 	struct svm_regctx *ctx;
@@ -1237,13 +1249,13 @@ emulate_rdmsr(struct svm_softc *sc, int vcpu, u_int num, bool *retu)
 	int error;
 
 	if (lapic_msr(num))
-		error = lapic_rdmsr(sc->vm, vcpu, num, &result, retu);
+		error = lapic_rdmsr(sc->vm, vcpu->vcpuid, num, &result, retu);
 	else
 		error = svm_rdmsr(sc, vcpu, num, &result, retu);
 
 	if (error == 0) {
-		state = svm_get_vmcb_state(sc, vcpu);
-		ctx = svm_get_guest_regctx(sc, vcpu);
+		state = svm_get_vmcb_state(vcpu);
+		ctx = svm_get_guest_regctx(vcpu);
 		state->rax = result & 0xffffffff;
 		ctx->sctx_rdx = result >> 32;
 	}
@@ -1324,7 +1336,8 @@ nrip_valid(uint64_t exitcode)
 }
 
 static int
-svm_vmexit(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
+svm_vmexit(struct svm_softc *svm_sc, struct svm_vcpu *vcpu,
+    struct vm_exit *vmexit)
 {
 	struct vmcb *vmcb;
 	struct vmcb_state *state;
@@ -1333,12 +1346,14 @@ svm_vmexit(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
 	uint64_t code, info1, info2, val;
 	uint32_t eax, ecx, edx;
 	int error __diagused, errcode_valid, handled, idtvec, reflect;
+	int vcpuid;
 	bool retu;
 
-	ctx = svm_get_guest_regctx(svm_sc, vcpu);
-	vmcb = svm_get_vmcb(svm_sc, vcpu);
+	ctx = svm_get_guest_regctx(vcpu);
+	vmcb = svm_get_vmcb(vcpu);
 	state = &vmcb->state;
 	ctrl = &vmcb->ctrl;
+	vcpuid = vcpu->vcpuid;
 
 	handled = 0;
 	code = ctrl->exitcode;
@@ -1349,7 +1364,7 @@ svm_vmexit(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
 	vmexit->rip = state->rip;
 	vmexit->inst_length = nrip_valid(code) ? ctrl->nrip - state->rip : 0;
 
-	vmm_stat_incr(svm_sc->vm, vcpu, VMEXIT_COUNT, 1);
+	vmm_stat_incr(svm_sc->vm, vcpuid, VMEXIT_COUNT, 1);
 
 	/*
 	 * #VMEXIT(INVALID) needs to be handled early because the VMCB is
@@ -1381,18 +1396,18 @@ svm_vmexit(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
 		handled = 1;
 		break;
 	case VMCB_EXIT_VINTR:	/* interrupt window exiting */
-		vmm_stat_incr(svm_sc->vm, vcpu, VMEXIT_VINTR, 1);
+		vmm_stat_incr(svm_sc->vm, vcpuid, VMEXIT_VINTR, 1);
 		handled = 1;
 		break;
 	case VMCB_EXIT_INTR:	/* external interrupt */
-		vmm_stat_incr(svm_sc->vm, vcpu, VMEXIT_EXTINT, 1);
+		vmm_stat_incr(svm_sc->vm, vcpuid, VMEXIT_EXTINT, 1);
 		handled = 1;
 		break;
 	case VMCB_EXIT_NMI:	/* external NMI */
 		handled = 1;
 		break;
 	case 0x40 ... 0x5F:
-		vmm_stat_incr(svm_sc->vm, vcpu, VMEXIT_EXCEPTION, 1);
+		vmm_stat_incr(svm_sc->vm, vcpuid, VMEXIT_EXCEPTION, 1);
 		reflect = 1;
 		idtvec = code - 0x40;
 		switch (idtvec) {
@@ -1402,7 +1417,7 @@ svm_vmexit(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
 			 * reflect the machine check back into the guest.
 			 */
 			reflect = 0;
-			VCPU_CTR0(svm_sc->vm, vcpu, "Vectoring to MCE handler");
+			VCPU_CTR0(svm_sc->vm, vcpuid, "Vectoring to MCE handler");
 			__asm __volatile("int $18");
 			break;
 		case IDT_PF:
@@ -1436,7 +1451,7 @@ svm_vmexit(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
 			 * event injection is identical to what it was when
 			 * the exception originally happened.
 			 */
-			VCPU_CTR2(svm_sc->vm, vcpu, "Reset inst_length from %d "
+			VCPU_CTR2(svm_sc->vm, vcpuid, "Reset inst_length from %d "
 			    "to zero before injecting exception %d",
 			    vmexit->inst_length, idtvec);
 			vmexit->inst_length = 0;
@@ -1452,9 +1467,9 @@ svm_vmexit(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
 
 		if (reflect) {
 			/* Reflect the exception back into the guest */
-			VCPU_CTR2(svm_sc->vm, vcpu, "Reflecting exception "
+			VCPU_CTR2(svm_sc->vm, vcpuid, "Reflecting exception "
 			    "%d/%#x into the guest", idtvec, (int)info1);
-			error = vm_inject_exception(svm_sc->vm, vcpu, idtvec,
+			error = vm_inject_exception(svm_sc->vm, vcpuid, idtvec,
 			    errcode_valid, info1, 0);
 			KASSERT(error == 0, ("%s: vm_inject_exception error %d",
 			    __func__, error));
@@ -1468,9 +1483,9 @@ svm_vmexit(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
 		retu = false;
 
 		if (info1) {
-			vmm_stat_incr(svm_sc->vm, vcpu, VMEXIT_WRMSR, 1);
+			vmm_stat_incr(svm_sc->vm, vcpuid, VMEXIT_WRMSR, 1);
 			val = (uint64_t)edx << 32 | eax;
-			VCPU_CTR2(svm_sc->vm, vcpu, "wrmsr %#x val %#lx",
+			VCPU_CTR2(svm_sc->vm, vcpuid, "wrmsr %#x val %#lx",
 			    ecx, val);
 			if (emulate_wrmsr(svm_sc, vcpu, ecx, val, &retu)) {
 				vmexit->exitcode = VM_EXITCODE_WRMSR;
@@ -1483,8 +1498,8 @@ svm_vmexit(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
 				    ("emulate_wrmsr retu with bogus exitcode"));
 			}
 		} else {
-			VCPU_CTR1(svm_sc->vm, vcpu, "rdmsr %#x", ecx);
-			vmm_stat_incr(svm_sc->vm, vcpu, VMEXIT_RDMSR, 1);
+			VCPU_CTR1(svm_sc->vm, vcpuid, "rdmsr %#x", ecx);
+			vmm_stat_incr(svm_sc->vm, vcpuid, VMEXIT_RDMSR, 1);
 			if (emulate_rdmsr(svm_sc, vcpu, ecx, &retu)) {
 				vmexit->exitcode = VM_EXITCODE_RDMSR;
 				vmexit->u.msr.code = ecx;
@@ -1498,40 +1513,40 @@ svm_vmexit(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
 		break;
 	case VMCB_EXIT_IO:
 		handled = svm_handle_io(svm_sc, vcpu, vmexit);
-		vmm_stat_incr(svm_sc->vm, vcpu, VMEXIT_INOUT, 1);
+		vmm_stat_incr(svm_sc->vm, vcpuid, VMEXIT_INOUT, 1);
 		break;
 	case VMCB_EXIT_CPUID:
-		vmm_stat_incr(svm_sc->vm, vcpu, VMEXIT_CPUID, 1);
-		handled = x86_emulate_cpuid(svm_sc->vm, vcpu, &state->rax,
+		vmm_stat_incr(svm_sc->vm, vcpuid, VMEXIT_CPUID, 1);
+		handled = x86_emulate_cpuid(svm_sc->vm, vcpuid, &state->rax,
 		    &ctx->sctx_rbx, &ctx->sctx_rcx, &ctx->sctx_rdx);
 		break;
 	case VMCB_EXIT_HLT:
-		vmm_stat_incr(svm_sc->vm, vcpu, VMEXIT_HLT, 1);
+		vmm_stat_incr(svm_sc->vm, vcpuid, VMEXIT_HLT, 1);
 		vmexit->exitcode = VM_EXITCODE_HLT;
 		vmexit->u.hlt.rflags = state->rflags;
 		break;
 	case VMCB_EXIT_PAUSE:
 		vmexit->exitcode = VM_EXITCODE_PAUSE;
-		vmm_stat_incr(svm_sc->vm, vcpu, VMEXIT_PAUSE, 1);
+		vmm_stat_incr(svm_sc->vm, vcpuid, VMEXIT_PAUSE, 1);
 		break;
 	case VMCB_EXIT_NPF:
 		/* EXITINFO2 contains the faulting guest physical address */
 		if (info1 & VMCB_NPF_INFO1_RSV) {
-			VCPU_CTR2(svm_sc->vm, vcpu, "nested page fault with "
+			VCPU_CTR2(svm_sc->vm, vcpuid, "nested page fault with "
 			    "reserved bits set: info1(%#lx) info2(%#lx)",
 			    info1, info2);
-		} else if (vm_mem_allocated(svm_sc->vm, vcpu, info2)) {
+		} else if (vm_mem_allocated(svm_sc->vm, vcpuid, info2)) {
 			vmexit->exitcode = VM_EXITCODE_PAGING;
 			vmexit->u.paging.gpa = info2;
 			vmexit->u.paging.fault_type = npf_fault_type(info1);
-			vmm_stat_incr(svm_sc->vm, vcpu, VMEXIT_NESTED_FAULT, 1);
-			VCPU_CTR3(svm_sc->vm, vcpu, "nested page fault "
+			vmm_stat_incr(svm_sc->vm, vcpuid, VMEXIT_NESTED_FAULT, 1);
+			VCPU_CTR3(svm_sc->vm, vcpuid, "nested page fault "
 			    "on gpa %#lx/%#lx at rip %#lx",
 			    info2, info1, state->rip);
 		} else if (svm_npf_emul_fault(info1)) {
 			svm_handle_inst_emul(vmcb, info2, vmexit);
-			vmm_stat_incr(svm_sc->vm, vcpu, VMEXIT_INST_EMUL, 1);
-			VCPU_CTR3(svm_sc->vm, vcpu, "inst_emul fault "
+			vmm_stat_incr(svm_sc->vm, vcpuid, VMEXIT_INST_EMUL, 1);
+			VCPU_CTR3(svm_sc->vm, vcpuid, "inst_emul fault "
 			    "for gpa %#lx/%#lx at rip %#lx",
 			    info2, info1, state->rip);
 		}
@@ -1552,7 +1567,7 @@ svm_vmexit(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
 	case VMCB_EXIT_SKINIT:
 	case VMCB_EXIT_ICEBP:
 	case VMCB_EXIT_INVLPGA:
-		vm_inject_ud(svm_sc->vm, vcpu);
+		vm_inject_ud(svm_sc->vm, vcpuid);
 		handled = 1;
 		break;
 	case VMCB_EXIT_INVD:
@@ -1561,11 +1576,11 @@ svm_vmexit(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
 		handled = 1;
 		break;
 	default:
-		vmm_stat_incr(svm_sc->vm, vcpu, VMEXIT_UNKNOWN, 1);
+		vmm_stat_incr(svm_sc->vm, vcpuid, VMEXIT_UNKNOWN, 1);
 		break;
 	}	
 
-	VCPU_CTR4(svm_sc->vm, vcpu, "%s %s vmexit at %#lx/%d",
+	VCPU_CTR4(svm_sc->vm, vcpuid, "%s %s vmexit at %#lx/%d",
 	    handled ? "handled" : "unhandled", exit_reason_to_str(code),
 	    vmexit->rip, vmexit->inst_length);
 
@@ -1591,11 +1606,12 @@ svm_vmexit(struct svm_softc *svm_sc, int vcpu, struct vm_exit *vmexit)
 }
 
 static void
-svm_inj_intinfo(struct svm_softc *svm_sc, int vcpu)
+svm_inj_intinfo(struct svm_softc *svm_sc, struct svm_vcpu *vcpu)
 {
 	uint64_t intinfo;
+	int vcpuid = vcpu->vcpuid;
 
-	if (!vm_entry_intinfo(svm_sc->vm, vcpu, &intinfo))
+	if (!vm_entry_intinfo(svm_sc->vm, vcpuid, &intinfo))
 		return;
 
 	KASSERT(VMCB_EXITINTINFO_VALID(intinfo), ("%s: entry intinfo is not "
@@ -1605,34 +1621,34 @@ svm_inj_intinfo(struct svm_softc *svm_sc, int vcpu)
 	    VMCB_EXITINTINFO_VECTOR(intinfo), VMCB_EXITINTINFO_EC(intinfo),
 	    VMCB_EXITINTINFO_EC_VALID(intinfo));
 
-	vmm_stat_incr(svm_sc->vm, vcpu, VCPU_INTINFO_INJECTED, 1);
-	VCPU_CTR1(svm_sc->vm, vcpu, "Injected entry intinfo: %#lx", intinfo);
+	vmm_stat_incr(svm_sc->vm, vcpuid, VCPU_INTINFO_INJECTED, 1);
+	VCPU_CTR1(svm_sc->vm, vcpuid, "Injected entry intinfo: %#lx", intinfo);
 }
 
 /*
  * Inject event to virtual cpu.
 */
 static void
-svm_inj_interrupts(struct svm_softc *sc, int vcpu, struct vlapic *vlapic)
+svm_inj_interrupts(struct svm_softc *sc, struct svm_vcpu *vcpu,
+    struct vlapic *vlapic)
 {
 	struct vmcb_ctrl *ctrl;
 	struct vmcb_state *state;
-	struct svm_vcpu *vcpustate;
 	uint8_t v_tpr;
 	int vector, need_intr_window;
 	int extint_pending;
+	int vcpuid = vcpu->vcpuid;
 
-	state = svm_get_vmcb_state(sc, vcpu);
-	ctrl = svm_get_vmcb_ctrl(sc, vcpu);
-	vcpustate = svm_get_vcpu(sc, vcpu);
+	state = svm_get_vmcb_state(vcpu);
+	ctrl = svm_get_vmcb_ctrl(vcpu);
 
 	need_intr_window = 0;
 
-	if (vcpustate->nextrip != state->rip) {
+	if (vcpu->nextrip != state->rip) {
 		ctrl->intr_shadow = 0;
-		VCPU_CTR2(sc->vm, vcpu, "Guest interrupt blocking "
+		VCPU_CTR2(sc->vm, vcpuid, "Guest interrupt blocking "
 		    "cleared due to rip change: %#lx/%#lx",
-		    vcpustate->nextrip, state->rip);
+		    vcpu->nextrip, state->rip);
 	}
 
 	/*
@@ -1647,19 +1663,19 @@ svm_inj_interrupts(struct svm_softc *sc, int vcpu, struct vlapic *vlapic)
 	svm_inj_intinfo(sc, vcpu);
 
 	/* NMI event has priority over interrupts. */
-	if (vm_nmi_pending(sc->vm, vcpu)) {
+	if (vm_nmi_pending(sc->vm, vcpuid)) {
 		if (nmi_blocked(sc, vcpu)) {
 			/*
 			 * Can't inject another NMI if the guest has not
 			 * yet executed an "iret" after the last NMI.
 			 */
-			VCPU_CTR0(sc->vm, vcpu, "Cannot inject NMI due "
+			VCPU_CTR0(sc->vm, vcpuid, "Cannot inject NMI due "
 			    "to NMI-blocking");
 		} else if (ctrl->intr_shadow) {
 			/*
 			 * Can't inject an NMI if the vcpu is in an intr_shadow.
 			 */
-			VCPU_CTR0(sc->vm, vcpu, "Cannot inject NMI due to "
+			VCPU_CTR0(sc->vm, vcpuid, "Cannot inject NMI due to "
 			    "interrupt shadow");
 			need_intr_window = 1;
 			goto done;
@@ -1668,7 +1684,7 @@ svm_inj_interrupts(struct svm_softc *sc, int vcpu, struct vlapic *vlapic)
 			 * If there is already an exception/interrupt pending
 			 * then defer the NMI until after that.
 			 */
-			VCPU_CTR1(sc->vm, vcpu, "Cannot inject NMI due to "
+			VCPU_CTR1(sc->vm, vcpuid, "Cannot inject NMI due to "
 			    "eventinj %#lx", ctrl->eventinj);
 
 			/*
@@ -1683,7 +1699,7 @@ svm_inj_interrupts(struct svm_softc *sc, int vcpu, struct vlapic *vlapic)
 			 */
 			ipi_cpu(curcpu, IPI_AST);	/* XXX vmm_ipinum? */
 		} else {
-			vm_nmi_clear(sc->vm, vcpu);
+			vm_nmi_clear(sc->vm, vcpuid);
 
 			/* Inject NMI, vector number is not used */
 			svm_eventinject(sc, vcpu, VMCB_EVENTINJ_TYPE_NMI,
@@ -1692,11 +1708,11 @@ svm_inj_interrupts(struct svm_softc *sc, int vcpu, struct vlapic *vlapic)
 
 			/* virtual NMI blocking is now in effect */
 			enable_nmi_blocking(sc, vcpu);
-			VCPU_CTR0(sc->vm, vcpu, "Injecting vNMI");
+			VCPU_CTR0(sc->vm, vcpuid, "Injecting vNMI");
 		}
 	}
 
-	extint_pending = vm_extint_pending(sc->vm, vcpu);
+	extint_pending = vm_extint_pending(sc->vm, vcpuid);
 	if (!extint_pending) {
 		if (!vlapic_pending_intr(vlapic, &vector))
 			goto done;
@@ -1714,21 +1730,21 @@ svm_inj_interrupts(struct svm_softc *sc, int vcpu, struct vlapic *vlapic)
 	 * then we cannot inject the pending interrupt.
 	 */
 	if ((state->rflags & PSL_I) == 0) {
-		VCPU_CTR2(sc->vm, vcpu, "Cannot inject vector %d due to "
+		VCPU_CTR2(sc->vm, vcpuid, "Cannot inject vector %d due to "
 		    "rflags %#lx", vector, state->rflags);
 		need_intr_window = 1;
 		goto done;
 	}
 
 	if (ctrl->intr_shadow) {
-		VCPU_CTR1(sc->vm, vcpu, "Cannot inject vector %d due to "
+		VCPU_CTR1(sc->vm, vcpuid, "Cannot inject vector %d due to "
 		    "interrupt shadow", vector);
 		need_intr_window = 1;
 		goto done;
 	}
 
 	if (ctrl->eventinj & VMCB_EVENTINJ_VALID) {
-		VCPU_CTR2(sc->vm, vcpu, "Cannot inject vector %d due to "
+		VCPU_CTR2(sc->vm, vcpuid, "Cannot inject vector %d due to "
 		    "eventinj %#lx", vector, ctrl->eventinj);
 		need_intr_window = 1;
 		goto done;
 	}
 
@@ -1739,7 +1755,7 @@ svm_inj_interrupts(struct svm_softc *sc, int vcpu, struct vlapic *vlapic)
 	if (!extint_pending) {
 		vlapic_intr_accepted(vlapic, vector);
 	} else {
-		vm_extint_clear(sc->vm, vcpu);
+		vm_extint_clear(sc->vm, vcpuid);
 		vatpic_intr_accepted(sc->vm, vector);
 	}
 
@@ -1765,10 +1781,10 @@ done:
 	v_tpr = vlapic_get_cr8(vlapic);
 	KASSERT(v_tpr <= 15, ("invalid v_tpr %#x", v_tpr));
 	if (ctrl->v_tpr != v_tpr) {
-		VCPU_CTR2(sc->vm, vcpu, "VMCB V_TPR changed from %#x to %#x",

*** 3579 LINES SKIPPED ***
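Since the rest of the diff is truncated above, the following self-contained
sketch shows how a generic layer can drive vcpu_init/vcpu_cleanup-style hooks
through an ops table and hand the cookie back to each callback.  Everything
here (my_ops, my_backend_vcpu, my_vcpu_*, main) is a hypothetical illustration
of the pattern, under the assumption that vmm.c wires the cookie through each
callback; it is not the actual stable/13 source.  It builds standalone with
"cc demo.c":

#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for struct vmm_ops. */
struct my_ops {
	void *(*vcpu_init)(void *vmi, int vcpu_id);
	void (*vcpu_cleanup)(void *vmi, void *vcpui);
	int (*getreg)(void *vmi, void *vcpui, int num, unsigned long *ret);
};

/* Backend-private per-vCPU state, analogous to svm_vcpu above. */
struct my_backend_vcpu {
	int vcpuid;
	unsigned long regs[4];
};

static void *
my_vcpu_init(void *vmi, int vcpu_id)
{
	struct my_backend_vcpu *vcpu = calloc(1, sizeof(*vcpu));

	(void)vmi;
	vcpu->vcpuid = vcpu_id;
	return (vcpu);
}

static void
my_vcpu_cleanup(void *vmi, void *vcpui)
{
	(void)vmi;
	free(vcpui);
}

static int
my_getreg(void *vmi, void *vcpui, int num, unsigned long *ret)
{
	struct my_backend_vcpu *vcpu = vcpui;	/* cast the cookie back */

	(void)vmi;
	if (num < 0 || num >= 4)
		return (-1);
	*ret = vcpu->regs[num];
	return (0);
}

static const struct my_ops ops = {
	.vcpu_init = my_vcpu_init,
	.vcpu_cleanup = my_vcpu_cleanup,
	.getreg = my_getreg,
};

/* Generic layer: stores the cookie and passes it back on every call. */
struct my_vcpu {
	int vcpuid;
	void *cookie;
};

int
main(void)
{
	struct my_vcpu vcpu;
	unsigned long val;

	vcpu.vcpuid = 0;
	vcpu.cookie = ops.vcpu_init(NULL, vcpu.vcpuid);
	if (ops.getreg(NULL, vcpu.cookie, 1, &val) == 0)
		printf("reg1 = %lu\n", val);
	ops.vcpu_cleanup(NULL, vcpu.cookie);
	return (0);
}

The design point mirrored here is that the generic layer never needs to know
the backend's per-vCPU layout; only the backend allocates, interprets, and
frees the cookie.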