Re: The Case for Rust (in the base system)
Date: Mon, 02 Sep 2024 23:56:24 UTC
Saying Fedora defaulted to btrfs because they treat their users as guinea pigs is a rather dramatic way to put it.
By definition, anyone who uses a rolling distro is a guinea pig, and that's not really a bad thing, since people know this and choose it.
Ubuntu non-LTS releases are just the same.
Additionally, people let warnings slip through their patches (mind you, without -Wall/-Wextra/-Wpedantic enabled), let alone run "make check".
If only people bothered to use the mature ecosystem of tools around C.
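To make that concrete, here is a minimal sketch of the kind of bug those flags catch; the file name and values are just illustrative:

    /* warn.c -- compile with: cc -Wall -Wextra -Wpedantic -c warn.c */
    #include <stdio.h>

    int main(void)
    {
        unsigned int n = 10;
        int done = 0;

        for (int i = 0; i < n; i++)   /* -Wextra (-Wsign-compare): signed/unsigned comparison */
            if (done = 1)             /* -Wall (-Wparentheses): assignment used as truth value */
                printf("iteration %d\n", i);
        return 0;
    }

Both slip through a bare "cc -c warn.c" without a word, and the second is the classic =/== typo that keeps shipping in real code.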
-------- Original Message --------
On 9/3/24 02:39, Steffen Nurpmeso <steffen@sdaoden.eu> wrote:
> Tomek CEDRO wrote in
> <CAFYkXjmURZwBbrFL=uWT+DZ6h_qjjpoucxW4-3CpDhKn3XX2gg@mail.gmail.com>:
> |Rust for Linux maintainer steps down in frustration with 'nontechnical
> |nonsense'.
> |
> |Community seems to C Rust more as a burden than a benefit
>
> All these filesystem maintainers said that if they fix bugs and do
> stuff, they do that in C, and the Rust layer has to follow, as
> opposed to the opposite, i.e., that the filesystem maintainers must
> henceforth fix bugs and do development in a way that matches Rust
> expectations .. which i find somehow understandable.
>
> (Not even to mention Ts'o's "writing a filesystem is easy, but
> creating an [enterprise] filesystem is very, very difficult (and
> i know what i am talking about / i know that by experience)",
> which is more or less a quote of him.
> Exchange "enterprise" with something of that level, you know.
> Wait, i can search for the email. For example
>
> Date: Sun, 29 Aug 2021 23:14:28 -0400
> Message-ID: <YSxNFKq9r3dyHT7l@mit.edu>
>
> The ext2/ext3/ext4 file system utilities are, as far as I know, the
> first fsck that was developed with a full regression test suite from
> the very beginning and integrated into the sources. (Just run "make
> check" and you'll know if you've broken something --- or it's how I
> know the person contributing code was sloppy and didn't bother to run
> "make check" before sending me patches to review....)
>
> What a lot of people don't seem to understand is that file system
> utilities are *important*, and more work than you might think. The
> ext4 file system is roughly 71 kLOC (thousand lines of code) in the
> kernel. E2fsprogs is 340 kLOC. In contrast, the btrfs kernel code is
> 145 kLOC (btrfs does have a lot more "sexy new features"), but its
> btrfs-progs utilities is currently only 124 kLOC.
>
> And the e2fsprogs line count doesn't include the library of 350+
> corrupted file system images that are part of its regression test
> suite. Btrfs has a few unit tests (as does e2fsprogs), but it doesn't
> have anything similar in terms of a library of corrupted file system
> images to test its fsck functionality. (Then again, neither do the
> file system utilities for FFS, so a regression test suite is not
> required to create a high quality fsck program. In my opinion, it
> very much helps, though!)
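
As an aside, a minimal harness in that spirit could look like the sketch below. The image directory and the pass/fail policy here are assumptions for illustration; as far as I know the real e2fsprogs suite additionally diffs e2fsck's output against stored expected transcripts per image.

    /* fscktest.c -- run e2fsck -fy over a library of known-corrupted
       file system images and fail if any run leaves errors uncorrected. */
    #include <dirent.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        const char *dir = "tests/corrupt-images";  /* hypothetical image library */
        DIR *d = opendir(dir);
        struct dirent *e;
        int failures = 0;

        if (!d) { perror(dir); return 1; }
        while ((e = readdir(d)) != NULL) {
            char path[4096];
            if (e->d_name[0] == '.')
                continue;
            snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
            pid_t pid = fork();
            if (pid == 0) {
                execlp("e2fsck", "e2fsck", "-fy", path, (char *)NULL);
                _exit(127);  /* exec failed */
            }
            int status;
            waitpid(pid, &status, 0);
            /* e2fsck exit codes: 0 = clean, 1 = errors corrected,
               2 = corrected and reboot advised, 4 and up = errors
               left uncorrected or an operational failure. */
            if (!WIFEXITED(status) || WEXITSTATUS(status) >= 4) {
                fprintf(stderr, "FAIL: %s\n", path);
                failures++;
            }
        }
        closedir(d);
        return failures ? 1 : 0;
    }
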
> [.]
> I was present at the very beginning of btrfs. In November 2007,
> various file system developers from a number of the big IT companies
> got together (IBM, Intel, HP, Red Hat, etc.) and folks decided that
> Linux "needed an answer to ZFS". In preparation for that meeting, I
> did some research asking various contacts I had at various companies
> how much effort and how long it took to create a new file system from
> scratch and make it be "enterprise ready". I asked folks at Digital
> how long it took for advfs, IBM for AIX and GPFS, etc., etc. And the
> answer I got back at that time was between 50 and 200 Person Years,
> with the bulk of the answers being between 100-200 PY's (the single
> 50PY estimate was an outlier). This was everything --- kernel and
> userspace coding, testing and QA, performance tuning, documentation,
> etc. etc. The calendar-time estimates I was given were between 5 and 7
> calendar years, and even then, users would take at least another 2-3
> years minimum of "kicking the tires", before they would trust *their*
> precious enterprise data on the file system.
>
> There was an Intel engineer at that meeting, who shall remain
> nameless, who said, "Don't tell the managers that or they'll never
> greenlight the project! Tell them 18 months...."
>
> And so I and other developers at IBM continued working on ext4, which
> we never expected would be able to compete with btrfs and ZFS in terms
> of "sexy new features", but our focus was on performance, scalability,
> and robustness.
>
> And it probably was about 2015 or so that btrfs finally became more or
> less stable, but only if you restricted yourself to core
> functionality. (e.g., snapshots, file-system level RAID, etc., were
> still dodgy at the time.)
>
>
> I will say that at Google, ext4 is still our primary file system,
> mainly because all of our expertise is currently focused there. We
> are starting to support XFS in "beta" ("Preview") for Cloud Optimized
> OS, since there are some enterprise customers that are using XFS on
> their systems, and they want to continue using XFS as they migrate
> from on-prem to the Cloud. We fully support XFS for Anthos Migrate
> (which is a read-mostly workload), and we're still building our
> expertise, working on getting bug fixes backported, etc., so we can
> support XFS the way enterprises expect for Cloud Optimized OS, which
> is our high-security, ChromeOS-based Linux distribution with a
> read-only, cryptographically signed root file system optimized for
> Docker and Kubernetes workloads.
>
> I'm not aware of any significant enterprise usage of btrfs, which is
> why we're not bothering to support btrfs at $WORK. The only big
> company which is using btrfs in production that I know of is Facebook,
> because they have a bunch of btrfs developers, but even there, they
> aren't using btrfs exclusively for all of their workloads.
>
> My understanding of why Fedora decided to make btrfs the default is
> that they wanted to get more guinea pigs to flush out the bugs.
> Note that Red Hat is responsible both for Red Hat Enterprise Linux
> (their paid product, where they make $$$) and for Fedora, their
> freebie "community distribution" --- and Red Hat does not currently
> support btrfs for their RHEL product.
>
> Make of that what you will....
>
> As well as
>
> Date: Sun, 29 Aug 2021 23:46:47 -0400
> Message-ID: <YSxUpxoVnUquMwOz@mit.edu>
>
> [.]
> Actually, the btrfs folks got that from ext2/ext3/ext4. The original
> behavior was "don't worry, be happy" (log errors and continue), and I
> added two additional options, "remount read-only", and "panic and
> reboot the system". I recommend the last especially for high
> availability systems, since you can then fail over to the secondary
> system, and fsck can repair the file system on the reboot path.
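
For illustration: those three behaviors correspond to the ext4 mount options errors=continue, errors=remount-ro and errors=panic, and the default policy can also be stored in the superblock with "tune2fs -e". Below is a minimal sketch of requesting the panic behavior at mount time via mount(2); the device and mount point are placeholders.

    /* errmount.c -- mount an ext4 file system so that metadata
       corruption panics the machine, letting an HA peer take over
       and fsck run on the reboot path. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* alternatives: "errors=continue", "errors=remount-ro" */
        if (mount("/dev/sdb1", "/mnt/data", "ext4", 0, "errors=panic") != 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }
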
>
>
> The primary general-purpose file systems in Linux which are under
> active development these days are btrfs, ext4, f2fs, and xfs. They
> all have slightly different focus areas. For example, f2fs is best
> for low-end flash, the kind that is found on $30 mobile handsets
> on sale in countries like India (aka, "the next billion users"). It
> has deep knowledge of "cost-optimized" flash where random writes are
> to be avoided at all costs because write amplification is a terrible
> thing with very primitive FTL's.
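
For a feel of the magnitude: on a primitive block-mapped FTL, a small random write can force a read-modify-write of an entire erase block, so worst-case write amplification is roughly erase-block size divided by write size. A toy calculation with assumed, typical sizes:

    /* wamp.c -- back-of-the-envelope worst-case write amplification */
    #include <stdio.h>

    int main(void)
    {
        const double erase_block = 2.0 * 1024 * 1024;  /* 2 MiB erase block */
        const double host_write  = 4.0 * 1024;         /* 4 KiB random write */

        /* worst case: every 4 KiB host write rewrites a whole 2 MiB block */
        printf("worst-case write amplification: %.0fx\n",
               erase_block / host_write);  /* prints 512x */
        return 0;
    }
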
>
> For very large file systems (e.g., large RAID arrays with petabytes of
> data), XFS will probably do better than ext4 for many workloads.
>
> Btrfs is the file system for users who have ZFS envy. I believe many
> of those sexy new features are best done at other layers in the
> storage stack, but if you *really* want file-system level snapshots
> and rollback, btrfs is the only game in town for Linux. (Unless of
> course you don't mind using ZFS and hope that Larry Ellison won't sue
> the bejesus out of you, and if you don't care about potential GPL
> violations....)
>
> Ext4 is still getting new features added; we recently added a
> light-weight journaling feature (a simplified version of the 2017
> Usenix ATC iJournaling paper[1]), and just last week we added a
> parallelized orphan list called Orphan File[2], which optimizes
> parallel truncate and unlink workloads. (Neither of these features
> is enabled by default yet; maybe in a few years, or earlier if
> community distros want to volunteer their users as guinea pigs. :-)
>
> [1] https://www.usenix.org/system/files/conference/atc17/atc17-park.pdf
> [2] https://www.spinics.net/lists/linux-ext4/msg79021.html
>
> We currently aren't adding the "sexy new features" of btrfs or ZFS,
> but that's mainly because there isn't a business justification to pay
> for the engineering effort needed to add them. I have some design
> sketches of how we *could* add them to ext4, but most of the ext4
> developers like food with our meals, and I'm still a working stiff so
> I focus on work that adds value to my employer --- and, of course,
> helping other ext4 developers working at other companies figure out
> ways to justify new features that would add value to *their*
> employers.
>
> I might work on some sexy new features if I won the Powerball Lottery
> and could retire rich, or if I were working at a company where
> engineers could work on whatever technologies they wanted without
> getting permission from the business types, but those companies tend
> not to end well (especially after they get purchased by Oracle....)
>
> Ok, granted, that is not what i said, but i am sure there was
> something along those lines in some message at some time.)
>
> |https://www.theregister.com/2024/09/02/rust_for_linux_maintainer_steps_d\
> |own/
> |--
> |CeDeROM, SQ7MHZ, http://www.tomek.cedro.info
> ...
> --End of <CAFYkXjmURZwBbrFL=uWT+DZ6h_qjjpoucxW4-3CpDhKn3XX2gg@mail.gmail\
> .com>
>
> --steffen
> |
> |Der Kragenbaer, The moon bear,
> |der holt sich munter he cheerfully and one by one
> |einen nach dem anderen runter wa.ks himself off
> |(By Robert Gernhardt)
>
>