From: Alan Somers <asomers@gmail.com>
Date: Mon, 22 Apr 2024 16:05:15 -0600
Subject: Re: Stressing malloc(9)
To: Karl Denninger
Cc: freebsd-hackers@freebsd.org

On Mon, Apr 22, 2024 at 2:07 PM Karl Denninger wrote:
>
> On 4/22/2024 12:46, Alan Somers wrote:
>
> When I said "33kiB" I meant "33 pages", or 132 kB. And the solution
> turns out to be very easy. Since I'm using ZFS on top of geli, with
> the default recsize of 128kB, I'll just set
> vfs.zfs.vdev.aggregation_limit to 128 kB. That way geli will never
> need to allocate more than 128kB contiguously. ZFS doesn't even need
> those big allocations to be contiguous; it's just aggregating smaller
> operations to reduce disk IOPs.
> But aggregating up to 1MB (the
> default) is overkill; any rotating HDD should easily be able to max
> out its consecutive write IOPs with 128kB operation size. I'll add a
> read-only sysctl for g_eli_alloc_sz too. Thanks Mark.
>
> -Alan
>
> Setting this on one of my production machines that uses zfs behind
> geli drops the load average quite materially with zero impact on
> throughput that I can see (thus far.) I will run this for a while
> but it certainly doesn't appear to have any negatives associated
> with it and does appear to improve efficiency quite a bit.

Great news! Also, FTR I should add that this advice only applies to
people who use HDDs. For SSDs zfs uses a different aggregation limit,
and the default value is already low enough.

-Alan
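
P.S. For anyone who would rather apply the tuning from a program than
from sysctl(8), here is a minimal userland sketch using
sysctlbyname(3). The 131072 value just mirrors the 128 kB discussed
above, and I'm assuming the node reads and writes as an int; treat it
as an illustration, not a recipe.

/*
 * Sketch: read the current vfs.zfs.vdev.aggregation_limit and lower
 * it to 128 kB via sysctlbyname(3).  Run as root; the value and the
 * int type are assumptions made for illustration.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdio.h>

int
main(void)
{
        int cur, want = 128 * 1024;     /* match a 128 kB recsize */
        size_t len = sizeof(cur);

        /* Read the current limit. */
        if (sysctlbyname("vfs.zfs.vdev.aggregation_limit", &cur, &len,
            NULL, 0) == -1)
                err(1, "sysctlbyname(read)");
        printf("aggregation limit was %d bytes\n", cur);

        /* Lower it to 128 kB. */
        if (sysctlbyname("vfs.zfs.vdev.aggregation_limit", NULL, NULL,
            &want, sizeof(want)) == -1)
                err(1, "sysctlbyname(write)");
        printf("aggregation limit now %d bytes\n", want);
        return (0);
}

Of course, putting vfs.zfs.vdev.aggregation_limit=131072 in
/etc/sysctl.conf is the simpler way to make it stick across reboots.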
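
As for the read-only g_eli_alloc_sz sysctl, I imagine the kernel-side
declaration will look roughly like the sketch below; the parent node,
the type macro, and the description string are guesses on my part,
not the actual commit.

/*
 * Hypothetical sketch of exporting g_eli_alloc_sz read-only under
 * kern.geom.eli.  The node name, SYSCTL_UINT (vs. another type
 * macro), and the description are assumptions; the real change may
 * well differ.
 */
SYSCTL_DECL(_kern_geom_eli);

SYSCTL_UINT(_kern_geom_eli, OID_AUTO, alloc_sz, CTLFLAG_RD,
    &g_eli_alloc_sz, 0,
    "Largest contiguous allocation geli expects to make");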