git: 1b425afa8d5f - main - cache: avoid hardcoded CACHE_PATH_CUTOFF
Date: Fri, 09 Jan 2026 14:16:19 UTC
The branch main has been updated by brooks:
URL: https://cgit.FreeBSD.org/src/commit/?id=1b425afa8d5fb611b17f951e5099c177fd97ffea
commit 1b425afa8d5fb611b17f951e5099c177fd97ffea
Author: Brooks Davis <brooks@FreeBSD.org>
AuthorDate: 2026-01-09 14:15:43 +0000
Commit: Brooks Davis <brooks@FreeBSD.org>
CommitDate: 2026-01-09 14:15:43 +0000
cache: avoid hardcoded CACHE_PATH_CUTOFF
Compute the cutoff at compile time instead, which avoids the need
for a CHERI-specific case.
Correct the comment about CACHE_PATH_CUTOFF. It dates to 5.2.0, not
4.4 BSD. Historic values are:
32 - introduced (c2935410f6d5f)
35 - NUL terminated and bumped (5d5c174869104)
39 - sized to alignment (bb48255cf5eff)
45 - bumped to improve efficiency on 64-bit (3862838921eb8)
No functional change.
Reviewed by: markj
Suggested by: jhb
Effort: CHERI upstreaming
Sponsored by: DARPA, AFRL
Differential Revision: https://reviews.freebsd.org/D54554
---
sys/kern/vfs_cache.c | 32 ++++++++++++++++++--------------
1 file changed, 18 insertions(+), 14 deletions(-)
diff --git a/sys/kern/vfs_cache.c b/sys/kern/vfs_cache.c
index 404234861710..3f8591bd0ba7 100644
--- a/sys/kern/vfs_cache.c
+++ b/sys/kern/vfs_cache.c
@@ -406,31 +406,35 @@ TAILQ_HEAD(cache_freebatch, namecache);
#define CACHE_ZONE_ALIGN_MASK UMA_ALIGNOF(struct namecache_ts)
/*
- * TODO: the initial value of CACHE_PATH_CUTOFF was inherited from the
- * 4.4 BSD codebase. Later on struct namecache was tweaked to become
- * smaller and the value was bumped to retain the total size, but it
- * was never re-evaluated for suitability. A simple test counting
- * lengths during package building shows that the value of 45 covers
- * about 86% of all added entries, reaching 99% at 65.
+ * TODO: CACHE_PATH_CUTOFF was initially introduced with an arbitrary
+ * value of 32 in FreeBSD 5.2.0. It was bumped to 35 and the path was
+ * NUL terminated with the introduction of DTrace probes. Later, it was
+ * expanded to match the alignment allowing an increase to 39, but it
+ * was not re-evaluated for suitability. It was again bumped to 45 on
+ * 64-bit systems and 41 on 32-bit systems (the current values, now
+ * computed at compile time rather than hardcoded). A simple test
+ * counting lengths during package building in 2020 showed that the
+ * value of 45 covers about 86% of all added entries, reaching 99%
+ * at 65.
*
* Regardless of the above, use of dedicated zones instead of malloc may be
* inducing additional waste. This may be hard to address as said zones are
* tied to VFS SMR. Even if retaining them, the current split should be
* re-evaluated.
*/
-#ifdef __LP64__
-#define CACHE_PATH_CUTOFF 45
-#else
-#define CACHE_PATH_CUTOFF 41
-#endif
+#define CACHE_PATH_CUTOFF_MIN 40
+#define CACHE_STRUCT_LEN(pathlen) \
+ (offsetof(struct namecache, nc_name) + (pathlen) + 1)
+#define CACHE_PATH_CUTOFF \
+ (roundup2(CACHE_STRUCT_LEN(CACHE_PATH_CUTOFF_MIN), \
+ _Alignof(struct namecache_ts)) - CACHE_STRUCT_LEN(0))
#define CACHE_ZONE_SMALL_SIZE \
- (offsetof(struct namecache, nc_name) + CACHE_PATH_CUTOFF + 1)
+ CACHE_STRUCT_LEN(CACHE_PATH_CUTOFF)
#define CACHE_ZONE_SMALL_TS_SIZE \
(offsetof(struct namecache_ts, nc_nc) + CACHE_ZONE_SMALL_SIZE)
#define CACHE_ZONE_LARGE_SIZE \
- roundup2(offsetof(struct namecache, nc_name) + NAME_MAX + 1, \
- _Alignof(struct namecache_ts))
+ roundup2(CACHE_STRUCT_LEN(NAME_MAX), _Alignof(struct namecache_ts))
#define CACHE_ZONE_LARGE_TS_SIZE \
(offsetof(struct namecache_ts, nc_nc) + CACHE_ZONE_LARGE_SIZE)