From nobody Fri Dec 31 04:13:57 2021
Date: Fri, 31 Dec 2021 04:13:57 GMT
Message-Id: <202112310413.1BV4DvvQ026540@gitrepo.freebsd.org>
From: Doug Moore
To: src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org,
    dev-commits-src-main@FreeBSD.org
Subject: git: c606ab59e7f9 - main - vm_extern: use standard address checkers everywhere
List-Id: Commit messages for the main branch of the src repository
List-Archive: https://lists.freebsd.org/archives/dev-commits-src-main
Sender: owner-dev-commits-src-main@freebsd.org
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-Git-Committer: dougm
X-Git-Repository: src
X-Git-Refname: refs/heads/main
X-Git-Reftype: branch
X-Git-Commit: c606ab59e7f9423f7027320e9a4514c7db39658d
Auto-Submitted: auto-generated
The branch main has been updated by dougm:

URL: https://cgit.FreeBSD.org/src/commit/?id=c606ab59e7f9423f7027320e9a4514c7db39658d

commit c606ab59e7f9423f7027320e9a4514c7db39658d
Author:     Doug Moore
AuthorDate: 2021-12-31 04:09:08 +0000
Commit:     Doug Moore
CommitDate: 2021-12-31 04:09:08 +0000

    vm_extern: use standard address checkers everywhere

    Define simple functions for alignment and boundary checks and use them
    everywhere instead of having slightly different implementations
    scattered about. Define them in vm_extern.h and use them where
    possible where vm_extern.h is included.

    Reviewed by:    kib, markj
    Differential Revision:  https://reviews.freebsd.org/D33685
---
 sys/arm/arm/busdma_machdep.c         |   15 +-
 sys/arm64/arm64/busdma_bounce.c      |   17 +-
 sys/dev/iommu/busdma_iommu.c         |    6 +-
 sys/dev/iommu/iommu.h                |   10 -
 sys/dev/iommu/iommu_gas.c            |    2 +-
 sys/mips/mips/busdma_machdep.c       | 1545 ++++++++++++++++++++++++++++++++++
 sys/powerpc/powerpc/busdma_machdep.c |   17 +-
 sys/riscv/riscv/busdma_bounce.c      |   15 +-
 sys/riscv/riscv/busdma_machdep.c     |    2 +-
 sys/vm/vm_extern.h                   |   27 +
 sys/vm/vm_map.c                      |    6 +-
 sys/vm/vm_page.c                     |    5 +-
 sys/vm/vm_phys.c                     |    4 +-
 sys/vm/vm_reserv.c                   |   21 +-
 sys/x86/x86/busdma_bounce.c          |   23 +-
 sys/x86/x86/busdma_machdep.c         |    2 +-
 16 files changed, 1624 insertions(+), 93 deletions(-)
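The diff below is truncated before the sys/vm/vm_extern.h hunk that actually
defines the new checkers, but the call sites that follow, together with the
iommu_test_boundary() helper this commit removes from sys/dev/iommu/iommu.h,
pin down their semantics. A minimal stand-alone sketch of those semantics --
the integer types and exact expressions here are illustrative reconstructions,
not the committed definitions:

#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of vm_addr_align_ok(): is addr aligned to the power-of-two
 * value alignment?  (Reconstructed from the alignment_bounce() and
 * KASSERT call sites in the diff below.)
 */
static inline bool
vm_addr_align_ok(uint64_t addr, uint64_t alignment)
{
	return ((addr & (alignment - 1)) == 0);
}

/*
 * Sketch of vm_addr_bound_ok(): does [addr, addr + size) avoid
 * crossing a boundary-aligned address?  A boundary of 0 means no
 * constraint.  (Mirrors the removed iommu_test_boundary() shown
 * later in this diff.)
 */
static inline bool
vm_addr_bound_ok(uint64_t addr, uint64_t size, uint64_t boundary)
{
	if (boundary == 0)
		return (true);
	return (addr + size <= ((addr + boundary) & ~(boundary - 1)));
}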
diff --git a/sys/arm/arm/busdma_machdep.c b/sys/arm/arm/busdma_machdep.c
index 11c306526e0d..55e02bc122a1 100644
--- a/sys/arm/arm/busdma_machdep.c
+++ b/sys/arm/arm/busdma_machdep.c
@@ -318,7 +318,7 @@ static __inline int
 alignment_bounce(bus_dma_tag_t dmat, bus_addr_t addr)
 {
 
-	return (addr & (dmat->alignment - 1));
+	return (!vm_addr_align_ok(addr, dmat->alignment));
 }
 
 /*
@@ -1007,18 +1007,13 @@ static int
 _bus_dmamap_addseg(bus_dma_tag_t dmat, bus_dmamap_t map, bus_addr_t curaddr,
     bus_size_t sgsize, bus_dma_segment_t *segs, int *segp)
 {
-	bus_addr_t baddr, bmask;
 	int seg;
 
 	/*
 	 * Make sure we don't cross any boundaries.
 	 */
-	bmask = ~(dmat->boundary - 1);
-	if (dmat->boundary > 0) {
-		baddr = (curaddr + dmat->boundary) & bmask;
-		if (sgsize > (baddr - curaddr))
-			sgsize = (baddr - curaddr);
-	}
+	if (!vm_addr_bound_ok(curaddr, sgsize, dmat->boundary))
+		sgsize = roundup2(curaddr, dmat->boundary) - curaddr;
 
 	/*
 	 * Insert chunk into a segment, coalescing with
@@ -1032,8 +1027,8 @@ _bus_dmamap_addseg(bus_dma_tag_t dmat, bus_dmamap_t map, bus_addr_t curaddr,
 	} else {
 		if (curaddr == segs[seg].ds_addr + segs[seg].ds_len &&
 		    (segs[seg].ds_len + sgsize) <= dmat->maxsegsz &&
-		    (dmat->boundary == 0 ||
-		    (segs[seg].ds_addr & bmask) == (curaddr & bmask)))
+		    vm_addr_bound_ok(segs[seg].ds_addr, segs[seg].ds_len,
+			dmat->boundary))
 			segs[seg].ds_len += sgsize;
 		else {
 			if (++seg >= dmat->nsegments)
diff --git a/sys/arm64/arm64/busdma_bounce.c b/sys/arm64/arm64/busdma_bounce.c
index ef2608d2dadc..0597b6ae8187 100644
--- a/sys/arm64/arm64/busdma_bounce.c
+++ b/sys/arm64/arm64/busdma_bounce.c
@@ -197,7 +197,7 @@ static bool
 alignment_bounce(bus_dma_tag_t dmat, bus_addr_t addr)
 {
 
-	return ((addr & (dmat->common.alignment - 1)) != 0);
+	return (!vm_addr_align_ok(addr, dmat->common.alignment));
 }
 
 static bool
@@ -616,7 +616,7 @@ bounce_bus_dmamem_alloc(bus_dma_tag_t dmat, void** vaddr, int flags,
 		    __func__, dmat, dmat->common.flags, ENOMEM);
 		free(*mapp, M_DEVBUF);
 		return (ENOMEM);
-	} else if (vtophys(*vaddr) & (dmat->alloc_alignment - 1)) {
+	} else if (!vm_addr_align_ok(vtophys(*vaddr), dmat->alloc_alignment)) {
 		printf("bus_dmamem_alloc failed to align memory properly.\n");
 	}
 	dmat->map_count++;
@@ -767,18 +767,13 @@ static bus_size_t
 _bus_dmamap_addseg(bus_dma_tag_t dmat, bus_dmamap_t map, bus_addr_t curaddr,
     bus_size_t sgsize, bus_dma_segment_t *segs, int *segp)
 {
-	bus_addr_t baddr, bmask;
 	int seg;
 
 	/*
 	 * Make sure we don't cross any boundaries.
 	 */
-	bmask = ~(dmat->common.boundary - 1);
-	if (dmat->common.boundary > 0) {
-		baddr = (curaddr + dmat->common.boundary) & bmask;
-		if (sgsize > (baddr - curaddr))
-			sgsize = (baddr - curaddr);
-	}
+	if (!vm_addr_bound_ok(curaddr, sgsize, dmat->common.boundary))
+		sgsize = roundup2(curaddr, dmat->common.boundary) - curaddr;
 
 	/*
 	 * Insert chunk into a segment, coalescing with
@@ -792,8 +787,8 @@ _bus_dmamap_addseg(bus_dma_tag_t dmat, bus_dmamap_t map, bus_addr_t curaddr,
 	} else {
 		if (curaddr == segs[seg].ds_addr + segs[seg].ds_len &&
 		    (segs[seg].ds_len + sgsize) <= dmat->common.maxsegsz &&
-		    (dmat->common.boundary == 0 ||
-		    (segs[seg].ds_addr & bmask) == (curaddr & bmask)))
+		    vm_addr_bound_ok(segs[seg].ds_addr, segs[seg].ds_len,
+			dmat->common.boundary))
 			segs[seg].ds_len += sgsize;
 		else {
 			if (++seg >= dmat->common.nsegments)
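Each busdma back end above replaces the same open-coded clipping with
roundup2(), the power-of-two rounding macro from sys/sys/param.h. A small
self-contained check, with invented example values, that the new expression
clips a boundary-crossing segment to the same size as the old mask arithmetic
when curaddr is not already boundary-aligned:

#include <assert.h>
#include <stdint.h>

/* roundup2() as defined in sys/sys/param.h, for power-of-two y. */
#define	roundup2(x, y)	(((x) + ((y) - 1)) & (~((y) - 1)))

int
main(void)
{
	uint64_t boundary = 0x1000;	/* example power-of-two boundary */
	uint64_t curaddr = 0x1fff0;	/* not boundary-aligned */
	uint64_t sgsize = 0x100;	/* would cross 0x20000 */

	/* Old style: clip sgsize at the next boundary line. */
	uint64_t bmask = ~(boundary - 1);
	uint64_t baddr = (curaddr + boundary) & bmask;
	uint64_t old_sgsize = (sgsize > baddr - curaddr) ?
	    baddr - curaddr : sgsize;

	/*
	 * New style: the same clip via roundup2().  (The kernel code
	 * only applies this when vm_addr_bound_ok() fails.)
	 */
	uint64_t new_sgsize = roundup2(curaddr, boundary) - curaddr;

	assert(old_sgsize == new_sgsize);	/* both 0x10 */
	return (0);
}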
diff --git a/sys/dev/iommu/busdma_iommu.c b/sys/dev/iommu/busdma_iommu.c
index d32beee19be7..99d47f0b6ede 100644
--- a/sys/dev/iommu/busdma_iommu.c
+++ b/sys/dev/iommu/busdma_iommu.c
@@ -619,8 +619,8 @@ iommu_bus_dmamap_load_something1(struct bus_dma_tag_iommu *tag,
 		if (buflen1 > tag->common.maxsegsz)
 			buflen1 = tag->common.maxsegsz;
 
-		KASSERT(((entry->start + offset) & (tag->common.alignment - 1))
-		    == 0,
+		KASSERT(vm_addr_align_ok(entry->start + offset,
+		    tag->common.alignment),
 		    ("alignment failed: ctx %p start 0x%jx offset %x "
 		    "align 0x%jx", ctx, (uintmax_t)entry->start, offset,
 		    (uintmax_t)tag->common.alignment));
@@ -631,7 +631,7 @@ iommu_bus_dmamap_load_something1(struct bus_dma_tag_iommu *tag,
 		    (uintmax_t)entry->start, (uintmax_t)entry->end,
 		    (uintmax_t)tag->common.lowaddr,
 		    (uintmax_t)tag->common.highaddr));
-		KASSERT(iommu_test_boundary(entry->start + offset, buflen1,
+		KASSERT(vm_addr_bound_ok(entry->start + offset, buflen1,
 		    tag->common.boundary),
 		    ("boundary failed: ctx %p start 0x%jx end 0x%jx "
 		    "boundary 0x%jx", ctx, (uintmax_t)entry->start,
diff --git a/sys/dev/iommu/iommu.h b/sys/dev/iommu/iommu.h
index dd803e84c2ee..ee1149e6ea8f 100644
--- a/sys/dev/iommu/iommu.h
+++ b/sys/dev/iommu/iommu.h
@@ -148,16 +148,6 @@ struct iommu_ctx {
 #define	IOMMU_DOMAIN_UNLOCK(dom)	mtx_unlock(&(dom)->lock)
 #define	IOMMU_DOMAIN_ASSERT_LOCKED(dom)	mtx_assert(&(dom)->lock, MA_OWNED)
 
-static inline bool
-iommu_test_boundary(iommu_gaddr_t start, iommu_gaddr_t size,
-    iommu_gaddr_t boundary)
-{
-
-	if (boundary == 0)
-		return (true);
-	return (start + size <= ((start + boundary) & ~(boundary - 1)));
-}
-
 void iommu_free_ctx(struct iommu_ctx *ctx);
 void iommu_free_ctx_locked(struct iommu_unit *iommu, struct iommu_ctx *ctx);
 struct iommu_ctx *iommu_get_ctx(struct iommu_unit *, device_t dev,
diff --git a/sys/dev/iommu/iommu_gas.c b/sys/dev/iommu/iommu_gas.c
index c4faebec9d08..a38835566fba 100644
--- a/sys/dev/iommu/iommu_gas.c
+++ b/sys/dev/iommu/iommu_gas.c
@@ -314,7 +314,7 @@ iommu_gas_match_one(struct iommu_gas_match_args *a, iommu_gaddr_t beg,
 		return (false);
 
 	/* No boundary crossing. */
-	if (iommu_test_boundary(a->entry->start + a->offset, a->size,
+	if (vm_addr_bound_ok(a->entry->start + a->offset, a->size,
 	    a->common->boundary))
 		return (true);
 
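The removed iommu_test_boundary() spells out the contract that
vm_addr_bound_ok() now provides at both iommu call sites: a zero boundary
disables the check, and otherwise the range must end at or before the next
multiple of the (power-of-two) boundary. A few illustrative cases, using a
local copy of the removed helper and invented values:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Same behavior as the iommu_test_boundary() removed above. */
static bool
bound_ok(uint64_t start, uint64_t size, uint64_t boundary)
{
	if (boundary == 0)
		return (true);
	return (start + size <= ((start + boundary) & ~(boundary - 1)));
}

int
main(void)
{
	assert(bound_ok(0x0000, 0x1000, 0x1000));  /* exactly fills a block */
	assert(bound_ok(0x0800, 0x0800, 0x1000));  /* ends on the line */
	assert(!bound_ok(0x0800, 0x0900, 0x1000)); /* crosses 0x1000 */
	assert(bound_ok(0xdead, 0xbeef, 0));       /* 0 means unconstrained */
	return (0);
}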
diff --git a/sys/mips/mips/busdma_machdep.c b/sys/mips/mips/busdma_machdep.c
new file mode 100644
index 000000000000..348c1d98c328
--- /dev/null
+++ b/sys/mips/mips/busdma_machdep.c
@@ -0,0 +1,1545 @@
+/*-
+ * SPDX-License-Identifier: BSD-2-Clause-FreeBSD
+ *
+ * Copyright (c) 2006 Oleksandr Tymoshenko
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification, immediately at the beginning of the file.
+ * 2. The name of the author may not be used to endorse or promote products
+ *    derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR
+ * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ * From i386/busdma_machdep.c,v 1.26 2002/04/19 22:58:09 alfred
+ */
+
+#include
+__FBSDID("$FreeBSD$");
+
+/*
+ * MIPS bus dma support routines
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define	MAX_BPAGES 64
+#define	BUS_DMA_COULD_BOUNCE	BUS_DMA_BUS3
+#define	BUS_DMA_MIN_ALLOC_COMP	BUS_DMA_BUS4
+
+/*
+ * On XBurst cores from Ingenic, cache-line writeback is local
+ * only, unless accompanied by invalidation. Invalidations force
+ * dirty line writeout and invalidation requests forwarded to
+ * other cores if other cores have the cache line dirty.
+ */
+#if defined(SMP) && defined(CPU_XBURST)
+#define	BUS_DMA_FORCE_WBINV
+#endif
+
+struct bounce_zone;
+
+struct bus_dma_tag {
+	bus_dma_tag_t		parent;
+	bus_size_t		alignment;
+	bus_addr_t		boundary;
+	bus_addr_t		lowaddr;
+	bus_addr_t		highaddr;
+	bus_dma_filter_t	*filter;
+	void			*filterarg;
+	bus_size_t		maxsize;
+	u_int			nsegments;
+	bus_size_t		maxsegsz;
+	int			flags;
+	int			ref_count;
+	int			map_count;
+	bus_dma_lock_t		*lockfunc;
+	void			*lockfuncarg;
+	bus_dma_segment_t	*segments;
+	struct bounce_zone	*bounce_zone;
+};
+
+struct bounce_page {
+	vm_offset_t	vaddr;		/* kva of bounce buffer */
+	vm_offset_t	vaddr_nocache;	/* kva of bounce buffer uncached */
+	bus_addr_t	busaddr;	/* Physical address */
+	vm_offset_t	datavaddr;	/* kva of client data */
+	bus_addr_t	dataaddr;	/* client physical address */
+	bus_size_t	datacount;	/* client data count */
+	STAILQ_ENTRY(bounce_page) links;
+};
+
+struct sync_list {
+	vm_offset_t	vaddr;		/* kva of bounce buffer */
+	bus_addr_t	busaddr;	/* Physical address */
+	bus_size_t	datacount;	/* client data count */
+};
+
+struct bounce_zone {
+	STAILQ_ENTRY(bounce_zone) links;
+	STAILQ_HEAD(bp_list, bounce_page) bounce_page_list;
+	int		total_bpages;
+	int		free_bpages;
+	int		reserved_bpages;
+	int		active_bpages;
+	int		total_bounced;
+	int		total_deferred;
+	int		map_count;
+	bus_size_t	alignment;
+	bus_addr_t	lowaddr;
+	char		zoneid[8];
+	char		lowaddrid[20];
+	struct sysctl_ctx_list sysctl_tree;
+	struct sysctl_oid *sysctl_tree_top;
+};
+
+static struct mtx bounce_lock;
+static int total_bpages;
+static int busdma_zonecount;
+static STAILQ_HEAD(, bounce_zone) bounce_zone_list;
+static void *busdma_ih;
+
+static SYSCTL_NODE(_hw, OID_AUTO, busdma, CTLFLAG_RD, 0,
+    "Busdma parameters");
+SYSCTL_INT(_hw_busdma, OID_AUTO, total_bpages, CTLFLAG_RD, &total_bpages, 0,
+    "Total bounce pages");
+
+#define	DMAMAP_UNCACHEABLE	0x08
+#define	DMAMAP_CACHE_ALIGNED	0x10
+
+struct bus_dmamap {
+	struct bp_list	bpages;
+	int		pagesneeded;
+	int		pagesreserved;
+	bus_dma_tag_t	dmat;
+	struct memdesc	mem;
+	int		flags;
+	TAILQ_ENTRY(bus_dmamap)	freelist;
+	STAILQ_ENTRY(bus_dmamap) links;
+	bus_dmamap_callback_t *callback;
+	void		*callback_arg;
+	int		sync_count;
+	struct sync_list *slist;
+};
+
+static STAILQ_HEAD(, bus_dmamap) bounce_map_waitinglist;
+static STAILQ_HEAD(, bus_dmamap) bounce_map_callbacklist;
+
+static void init_bounce_pages(void *dummy);
+static int alloc_bounce_zone(bus_dma_tag_t dmat);
+static int alloc_bounce_pages(bus_dma_tag_t dmat, u_int numpages);
+static int reserve_bounce_pages(bus_dma_tag_t dmat, bus_dmamap_t map,
+    int commit);
+static bus_addr_t add_bounce_page(bus_dma_tag_t dmat, bus_dmamap_t map,
+    vm_offset_t vaddr, bus_addr_t addr, bus_size_t size);
+static void free_bounce_page(bus_dma_tag_t dmat, struct bounce_page *bpage);
+
+/* Default tag, as most drivers provide no parent tag. */
+bus_dma_tag_t mips_root_dma_tag;
+
+static uma_zone_t dmamap_zone;	/* Cache of struct bus_dmamap items */
+
+static busdma_bufalloc_t coherent_allocator;	/* Cache of coherent buffers */
+static busdma_bufalloc_t standard_allocator;	/* Cache of standard buffers */
+
+MALLOC_DEFINE(M_BUSDMA, "busdma", "busdma metadata");
+MALLOC_DEFINE(M_BOUNCE, "bounce", "busdma bounce pages");
+
+/*
+ * This is the ctor function passed to uma_zcreate() for the pool of dma maps.
+ * It'll need platform-specific changes if this code is copied.
+ */
+static int
+dmamap_ctor(void *mem, int size, void *arg, int flags)
+{
+	bus_dmamap_t map;
+	bus_dma_tag_t dmat;
+
+	map = (bus_dmamap_t)mem;
+	dmat = (bus_dma_tag_t)arg;
+
+	dmat->map_count++;
+
+	bzero(map, sizeof(*map));
+	map->dmat = dmat;
+	STAILQ_INIT(&map->bpages);
+
+	return (0);
+}
+
+/*
+ * This is the dtor function passed to uma_zcreate() for the pool of dma maps.
+ * It may need platform-specific changes if this code is copied .
+ */
+static void
+dmamap_dtor(void *mem, int size, void *arg)
+{
+	bus_dmamap_t map;
+
+	map = (bus_dmamap_t)mem;
+
+	map->dmat->map_count--;
+}
+
+static void
+busdma_init(void *dummy)
+{
+
+	/* Create a cache of maps for bus_dmamap_create(). */
+	dmamap_zone = uma_zcreate("dma maps", sizeof(struct bus_dmamap),
+	    dmamap_ctor, dmamap_dtor, NULL, NULL, UMA_ALIGN_PTR, 0);
+
+	/* Create a cache of buffers in standard (cacheable) memory. */
+	standard_allocator = busdma_bufalloc_create("buffer",
+	    mips_dcache_max_linesize,	/* minimum_alignment */
+	    NULL,			/* uma_alloc func */
+	    NULL,			/* uma_free func */
+	    0);				/* uma_zcreate_flags */
+
+	/*
+	 * Create a cache of buffers in uncacheable memory, to implement the
+	 * BUS_DMA_COHERENT flag.
+	 */
+	coherent_allocator = busdma_bufalloc_create("coherent",
+	    mips_dcache_max_linesize,	/* minimum_alignment */
+	    busdma_bufalloc_alloc_uncacheable,
+	    busdma_bufalloc_free_uncacheable,
+	    0);				/* uma_zcreate_flags */
+}
+SYSINIT(busdma, SI_SUB_KMEM, SI_ORDER_FOURTH, busdma_init, NULL);
+
+/*
+ * Return true if a match is made.
+ *
+ * To find a match walk the chain of bus_dma_tag_t's looking for 'paddr'.
+ *
+ * If paddr is within the bounds of the dma tag then call the filter callback
+ * to check for a match, if there is no filter callback then assume a match.
+ */
+static int
+run_filter(bus_dma_tag_t dmat, bus_addr_t paddr)
+{
+	int retval;
+
+	retval = 0;
+
+	do {
+		if (((paddr > dmat->lowaddr && paddr <= dmat->highaddr)
+		    || !vm_addr_align_ok(paddr, dmat->alignment))
+		    && (dmat->filter == NULL
+		    || (*dmat->filter)(dmat->filterarg, paddr) != 0))
+			retval = 1;
+
+		dmat = dmat->parent;
+	} while (retval == 0 && dmat != NULL);
+	return (retval);
+}
+
+/*
+ * Check to see if the specified page is in an allowed DMA range.
+ */
+
+static __inline int
+_bus_dma_can_bounce(vm_offset_t lowaddr, vm_offset_t highaddr)
+{
+	int i;
+	for (i = 0; phys_avail[i] && phys_avail[i + 1]; i += 2) {
+		if ((lowaddr >= phys_avail[i] && lowaddr <= phys_avail[i + 1])
+		    || (lowaddr < phys_avail[i] &&
+		    highaddr > phys_avail[i]))
+			return (1);
+	}
+	return (0);
+}
+
+/*
+ * Convenience function for manipulating driver locks from busdma (during
+ * busdma_swi, for example).
+ */
+void
+busdma_lock_mutex(void *arg, bus_dma_lock_op_t op)
+{
+	struct mtx *dmtx;
+
+	dmtx = (struct mtx *)arg;
+	switch (op) {
+	case BUS_DMA_LOCK:
+		mtx_lock(dmtx);
+		break;
+	case BUS_DMA_UNLOCK:
+		mtx_unlock(dmtx);
+		break;
+	default:
+		panic("Unknown operation 0x%x for busdma_lock_mutex!", op);
+	}
+}
+
+/*
+ * dflt_lock should never get called. It gets put into the dma tag when
+ * lockfunc == NULL, which is only valid if the maps that are associated
+ * with the tag are meant to never be defered.
+ * XXX Should have a way to identify which driver is responsible here.
+ */
+static void
+dflt_lock(void *arg, bus_dma_lock_op_t op)
+{
+#ifdef INVARIANTS
+	panic("driver error: busdma dflt_lock called");
+#else
+	printf("DRIVER_ERROR: busdma dflt_lock called\n");
+#endif
+}
+
+static __inline bus_dmamap_t
+_busdma_alloc_dmamap(bus_dma_tag_t dmat)
+{
+	struct sync_list *slist;
+	bus_dmamap_t map;
+
+	slist = malloc(sizeof(*slist) * dmat->nsegments, M_BUSDMA, M_NOWAIT);
+	if (slist == NULL)
+		return (NULL);
+	map = uma_zalloc_arg(dmamap_zone, dmat, M_NOWAIT);
+	if (map != NULL)
+		map->slist = slist;
+	else
+		free(slist, M_BUSDMA);
+	return (map);
+}
+
+static __inline void
+_busdma_free_dmamap(bus_dmamap_t map)
+{
+
+	free(map->slist, M_BUSDMA);
+	uma_zfree(dmamap_zone, map);
+}
+
+/*
+ * Allocate a device specific dma_tag.
+ */
+#define	SEG_NB 1024
+
+int
+bus_dma_tag_create(bus_dma_tag_t parent, bus_size_t alignment,
+    bus_addr_t boundary, bus_addr_t lowaddr,
+    bus_addr_t highaddr, bus_dma_filter_t *filter,
+    void *filterarg, bus_size_t maxsize, int nsegments,
+    bus_size_t maxsegsz, int flags, bus_dma_lock_t *lockfunc,
+    void *lockfuncarg, bus_dma_tag_t *dmat)
+{
+	bus_dma_tag_t newtag;
+	int error = 0;
+	/* Return a NULL tag on failure */
+	*dmat = NULL;
+	if (!parent)
+		parent = mips_root_dma_tag;
+
+	newtag = (bus_dma_tag_t)malloc(sizeof(*newtag), M_BUSDMA, M_NOWAIT);
+	if (newtag == NULL) {
+		CTR4(KTR_BUSDMA, "%s returned tag %p tag flags 0x%x error %d",
+		    __func__, newtag, 0, error);
+		return (ENOMEM);
+	}
+
+	newtag->parent = parent;
+	newtag->alignment = alignment;
+	newtag->boundary = boundary;
+	newtag->lowaddr = trunc_page((vm_offset_t)lowaddr) + (PAGE_SIZE - 1);
+	newtag->highaddr = trunc_page((vm_offset_t)highaddr) + (PAGE_SIZE - 1);
+	newtag->filter = filter;
+	newtag->filterarg = filterarg;
+	newtag->maxsize = maxsize;
+	newtag->nsegments = nsegments;
+	newtag->maxsegsz = maxsegsz;
+	newtag->flags = flags;
+	if (cpuinfo.cache_coherent_dma)
+		newtag->flags |= BUS_DMA_COHERENT;
+	newtag->ref_count = 1; /* Count ourself */
+	newtag->map_count = 0;
+	if (lockfunc != NULL) {
+		newtag->lockfunc = lockfunc;
+		newtag->lockfuncarg = lockfuncarg;
+	} else {
+		newtag->lockfunc = dflt_lock;
+		newtag->lockfuncarg = NULL;
+	}
+	newtag->segments = NULL;
+
+	/*
+	 * Take into account any restrictions imposed by our parent tag
+	 */
+	if (parent != NULL) {
+		newtag->lowaddr = MIN(parent->lowaddr, newtag->lowaddr);
+		newtag->highaddr = MAX(parent->highaddr, newtag->highaddr);
+		if (newtag->boundary == 0)
+			newtag->boundary = parent->boundary;
+		else if (parent->boundary != 0)
+			newtag->boundary =
+			    MIN(parent->boundary, newtag->boundary);
+		if ((newtag->filter != NULL) ||
+		    ((parent->flags & BUS_DMA_COULD_BOUNCE) != 0))
+			newtag->flags |= BUS_DMA_COULD_BOUNCE;
+		if (newtag->filter == NULL) {
+			/*
+			 * Short circuit looking at our parent directly
+			 * since we have encapsulated all of its information
+			 */
+			newtag->filter = parent->filter;
+			newtag->filterarg = parent->filterarg;
+			newtag->parent = parent->parent;
+		}
+		if (newtag->parent != NULL)
+			atomic_add_int(&parent->ref_count, 1);
+	}
+	if (_bus_dma_can_bounce(newtag->lowaddr, newtag->highaddr)
+	    || newtag->alignment > 1)
+		newtag->flags |= BUS_DMA_COULD_BOUNCE;
+
+	if (((newtag->flags & BUS_DMA_COULD_BOUNCE) != 0) &&
+	    (flags & BUS_DMA_ALLOCNOW) != 0) {
+		struct bounce_zone *bz;
+
+		/* Must bounce */
+
+		if ((error = alloc_bounce_zone(newtag)) != 0) {
+			free(newtag, M_BUSDMA);
+			return (error);
+		}
+		bz = newtag->bounce_zone;
+
+		if (ptoa(bz->total_bpages) < maxsize) {
+			int pages;
+
+			pages = atop(maxsize) - bz->total_bpages;
+
+			/* Add pages to our bounce pool */
+			if (alloc_bounce_pages(newtag, pages) < pages)
+				error = ENOMEM;
+		}
+		/* Performed initial allocation */
+		newtag->flags |= BUS_DMA_MIN_ALLOC_COMP;
+	} else
+		newtag->bounce_zone = NULL;
+	if (error != 0)
+		free(newtag, M_BUSDMA);
+	else
+		*dmat = newtag;
+	CTR4(KTR_BUSDMA, "%s returned tag %p tag flags 0x%x error %d",
+	    __func__, newtag, (newtag != NULL ? newtag->flags : 0), error);
+
+	return (error);
+}
+
+void
+bus_dma_template_clone(bus_dma_template_t *t, bus_dma_tag_t dmat)
+{
+
+	if (t == NULL || dmat == NULL)
+		return;
+
+	t->parent = dmat->parent;
+	t->alignment = dmat->alignment;
+	t->boundary = dmat->boundary;
+	t->lowaddr = dmat->lowaddr;
+	t->highaddr = dmat->highaddr;
+	t->maxsize = dmat->maxsize;
+	t->nsegments = dmat->nsegments;
+	t->maxsegsize = dmat->maxsegsz;
+	t->flags = dmat->flags;
+	t->lockfunc = dmat->lockfunc;
+	t->lockfuncarg = dmat->lockfuncarg;
+}
+
+int
+bus_dma_tag_set_domain(bus_dma_tag_t dmat, int domain)
+{
+
+	return (0);
+}
+
+int
+bus_dma_tag_destroy(bus_dma_tag_t dmat)
+{
+#ifdef KTR
+	bus_dma_tag_t dmat_copy = dmat;
+#endif
+
+	if (dmat != NULL) {
+		if (dmat->map_count != 0)
+			return (EBUSY);
+
+		while (dmat != NULL) {
+			bus_dma_tag_t parent;
+
+			parent = dmat->parent;
+			atomic_subtract_int(&dmat->ref_count, 1);
+			if (dmat->ref_count == 0) {
+				if (dmat->segments != NULL)
+					free(dmat->segments, M_BUSDMA);
+				free(dmat, M_BUSDMA);
+				/*
+				 * Last reference count, so
+				 * release our reference
+				 * count on our parent.
+				 */
+				dmat = parent;
+			} else
+				dmat = NULL;
+		}
+	}
+	CTR2(KTR_BUSDMA, "%s tag %p", __func__, dmat_copy);
+
+	return (0);
+}
+
+#include
+/*
+ * Allocate a handle for mapping from kva/uva/physical
+ * address space into bus device space.
+ */
+int
+bus_dmamap_create(bus_dma_tag_t dmat, int flags, bus_dmamap_t *mapp)
+{
+	bus_dmamap_t newmap;
+	int error = 0;
+
+	if (dmat->segments == NULL) {
+		dmat->segments = (bus_dma_segment_t *)malloc(
+		    sizeof(bus_dma_segment_t) * dmat->nsegments, M_BUSDMA,
+		    M_NOWAIT);
+		if (dmat->segments == NULL) {
+			CTR3(KTR_BUSDMA, "%s: tag %p error %d",
+			    __func__, dmat, ENOMEM);
+			return (ENOMEM);
+		}
+	}
+
+	newmap = _busdma_alloc_dmamap(dmat);
+	if (newmap == NULL) {
+		CTR3(KTR_BUSDMA, "%s: tag %p error %d", __func__, dmat, ENOMEM);
+		return (ENOMEM);
+	}
+	*mapp = newmap;
+
+	/*
+	 * Bouncing might be required if the driver asks for an active
+	 * exclusion region, a data alignment that is stricter than 1, and/or
+	 * an active address boundary.
+	 */
+	if (dmat->flags & BUS_DMA_COULD_BOUNCE) {
+		/* Must bounce */
+		struct bounce_zone *bz;
+		int maxpages;
+
+		if (dmat->bounce_zone == NULL) {
+			if ((error = alloc_bounce_zone(dmat)) != 0) {
+				_busdma_free_dmamap(newmap);
+				*mapp = NULL;
+				return (error);
+			}
+		}
+		bz = dmat->bounce_zone;
+
+		/* Initialize the new map */
+		STAILQ_INIT(&((*mapp)->bpages));
+
+		/*
+		 * Attempt to add pages to our pool on a per-instance
+		 * basis up to a sane limit.
+		 */
+		maxpages = MAX_BPAGES;
+		if ((dmat->flags & BUS_DMA_MIN_ALLOC_COMP) == 0
+		    || (bz->map_count > 0 && bz->total_bpages < maxpages)) {
+			int pages;
+
+			pages = MAX(atop(dmat->maxsize), 1);
+			pages = MIN(maxpages - bz->total_bpages, pages);
+			pages = MAX(pages, 1);
+			if (alloc_bounce_pages(dmat, pages) < pages)
+				error = ENOMEM;
+
+			if ((dmat->flags & BUS_DMA_MIN_ALLOC_COMP) == 0) {
+				if (error == 0)
+					dmat->flags |= BUS_DMA_MIN_ALLOC_COMP;
+			} else {
+				error = 0;
+			}
+		}
+		bz->map_count++;
+	}
+
+	CTR4(KTR_BUSDMA, "%s: tag %p tag flags 0x%x error %d",
+	    __func__, dmat, dmat->flags, error);
+
+	return (0);
+}
+
+/*
+ * Destroy a handle for mapping from kva/uva/physical
+ * address space into bus device space.
+ */
+int
+bus_dmamap_destroy(bus_dma_tag_t dmat, bus_dmamap_t map)
+{
+
+	if (STAILQ_FIRST(&map->bpages) != NULL || map->sync_count != 0) {
+		CTR3(KTR_BUSDMA, "%s: tag %p error %d",
+		    __func__, dmat, EBUSY);
+		return (EBUSY);
+	}
+	if (dmat->bounce_zone)
+		dmat->bounce_zone->map_count--;
+	_busdma_free_dmamap(map);
+	CTR2(KTR_BUSDMA, "%s: tag %p error 0", __func__, dmat);
+	return (0);
+}
+
+/*
+ * Allocate a piece of memory that can be efficiently mapped into
+ * bus device space based on the constraints lited in the dma tag.
+ * A dmamap to for use with dmamap_load is also allocated.
+ */
+int
+bus_dmamem_alloc(bus_dma_tag_t dmat, void** vaddrp, int flags,
+    bus_dmamap_t *mapp)
+{
+	bus_dmamap_t newmap = NULL;
+	busdma_bufalloc_t ba;
+	struct busdma_bufzone *bufzone;
+	vm_memattr_t memattr;
+	void *vaddr;
+
+	int mflags;
+
+	if (flags & BUS_DMA_NOWAIT)
+		mflags = M_NOWAIT;
+	else
+		mflags = M_WAITOK;
+	if (dmat->segments == NULL) {
+		dmat->segments = (bus_dma_segment_t *)malloc(
+		    sizeof(bus_dma_segment_t) * dmat->nsegments, M_BUSDMA,
+		    mflags);
+		if (dmat->segments == NULL) {
+			CTR4(KTR_BUSDMA, "%s: tag %p tag flags 0x%x error %d",
+			    __func__, dmat, dmat->flags, ENOMEM);
+			return (ENOMEM);
+		}
+	}
+
+	newmap = _busdma_alloc_dmamap(dmat);
+	if (newmap == NULL) {
+		CTR4(KTR_BUSDMA, "%s: tag %p tag flags 0x%x error %d",
+		    __func__, dmat, dmat->flags, ENOMEM);
+		return (ENOMEM);
+	}
+
+	/*
+	 * If all the memory is coherent with DMA then we don't need to
+	 * do anything special for a coherent mapping request.
+	 */
+	if (dmat->flags & BUS_DMA_COHERENT)
+		flags &= ~BUS_DMA_COHERENT;
+
+	if (flags & BUS_DMA_COHERENT) {
+		memattr = VM_MEMATTR_UNCACHEABLE;
+		ba = coherent_allocator;
+		newmap->flags |= DMAMAP_UNCACHEABLE;
+	} else {
+		memattr = VM_MEMATTR_DEFAULT;
+		ba = standard_allocator;
+	}
+	/* All buffers we allocate are cache-aligned. */
+	newmap->flags |= DMAMAP_CACHE_ALIGNED;
+
+	if (flags & BUS_DMA_ZERO)
+		mflags |= M_ZERO;
+
+	/*
+	 * Try to find a bufzone in the allocator that holds a cache of buffers
+	 * of the right size for this request. If the buffer is too big to be
+	 * held in the allocator cache, this returns NULL.
+	 */
+	bufzone = busdma_bufalloc_findzone(ba, dmat->maxsize);
+
+	/*
+	 * Allocate the buffer from the uma(9) allocator if...
+	 *  - It's small enough to be in the allocator (bufzone not NULL).
+	 *  - The alignment constraint isn't larger than the allocation size
+	 *    (the allocator aligns buffers to their size boundaries).
+	 *  - There's no need to handle lowaddr/highaddr exclusion zones.
+	 * else allocate non-contiguous pages if...
+	 *  - The page count that could get allocated doesn't exceed
+	 *    nsegments also when the maximum segment size is less
+	 *    than PAGE_SIZE.
+	 *  - The alignment constraint isn't larger than a page boundary.
+	 *  - There are no boundary-crossing constraints.
+	 * else allocate a block of contiguous pages because one or more of the
+	 * constraints is something that only the contig allocator can fulfill.
+	 */
+	if (bufzone != NULL && dmat->alignment <= bufzone->size &&
+	    !_bus_dma_can_bounce(dmat->lowaddr, dmat->highaddr)) {
+		vaddr = uma_zalloc(bufzone->umazone, mflags);
+	} else if (dmat->nsegments >=
+	    howmany(dmat->maxsize, MIN(dmat->maxsegsz, PAGE_SIZE)) &&
+	    dmat->alignment <= PAGE_SIZE &&
+	    (dmat->boundary % PAGE_SIZE) == 0) {
+		vaddr = (void *)kmem_alloc_attr(dmat->maxsize, mflags, 0,
+		    dmat->lowaddr, memattr);
+	} else {
+		vaddr = (void *)kmem_alloc_contig(dmat->maxsize, mflags, 0,
+		    dmat->lowaddr, dmat->alignment, dmat->boundary, memattr);
+	}
+	if (vaddr == NULL) {
+		_busdma_free_dmamap(newmap);
+		newmap = NULL;
+	} else {
+		newmap->sync_count = 0;
+	}
+	*vaddrp = vaddr;
+	*mapp = newmap;
+
+	return (vaddr == NULL ? ENOMEM : 0);
+}
+
+/*
+ * Free a piece of memory and it's allocated dmamap, that was allocated
+ * via bus_dmamem_alloc. Make the same choice for free/contigfree.
+ */
+void
+bus_dmamem_free(bus_dma_tag_t dmat, void *vaddr, bus_dmamap_t map)
+{
+	struct busdma_bufzone *bufzone;
+	busdma_bufalloc_t ba;
+
+	if (map->flags & DMAMAP_UNCACHEABLE)
+		ba = coherent_allocator;
+	else
+		ba = standard_allocator;
+
+	free(map->slist, M_BUSDMA);
+	uma_zfree(dmamap_zone, map);
+
+	bufzone = busdma_bufalloc_findzone(ba, dmat->maxsize);
+
+	if (bufzone != NULL && dmat->alignment <= bufzone->size &&
+	    !_bus_dma_can_bounce(dmat->lowaddr, dmat->highaddr))
+		uma_zfree(bufzone->umazone, vaddr);
+	else
+		kmem_free((vm_offset_t)vaddr, dmat->maxsize);
+	CTR3(KTR_BUSDMA, "%s: tag %p flags 0x%x", __func__, dmat, dmat->flags);
+}
+
+static void
+_bus_dmamap_count_phys(bus_dma_tag_t dmat, bus_dmamap_t map, vm_paddr_t buf,
+    bus_size_t buflen, int flags)
+{
+	bus_addr_t curaddr;
+	bus_size_t sgsize;
+
+	if (map->pagesneeded == 0) {
+		CTR3(KTR_BUSDMA, "lowaddr= %d, boundary= %d, alignment= %d",
+		    dmat->lowaddr, dmat->boundary, dmat->alignment);
+		CTR2(KTR_BUSDMA, "map= %p, pagesneeded= %d",

*** 1109 LINES SKIPPED ***
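The run_filter() loop above shows the other half of the conversion: the
open-coded (paddr & (alignment - 1)) tests throughout the tree become
!vm_addr_align_ok(paddr, alignment) calls. A stand-alone illustration of that
predicate, again with assumed integer types and invented example values:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for the vm_addr_align_ok() used in run_filter(). */
static bool
align_ok(uint64_t addr, uint64_t alignment)
{
	/* alignment must be a power of two, as busdma tags require. */
	return ((addr & (alignment - 1)) == 0);
}

int
main(void)
{
	assert(align_ok(0x4000, 0x1000));	/* 16 KB is 4 KB-aligned */
	assert(!align_ok(0x4004, 0x1000));	/* low bits set: must bounce */
	assert(align_ok(0x4004, 1));		/* alignment 1 accepts anything */
	return (0);
}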