ZFS...

Michelle Sullivan michelle at sorbs.net
Wed May 8 00:26:16 UTC 2019


Paul Mather wrote:
> On May 7, 2019, at 1:02 AM, Michelle Sullivan <michelle at sorbs.net> wrote:
>
>>> On 07 May 2019, at 10:53, Paul Mather <paul at gromit.dlib.vt.edu> wrote:
>>>
>>>> On May 6, 2019, at 10:14 AM, Michelle Sullivan <michelle at sorbs.net> 
>>>> wrote:
>>>>
>>>> My issue here (and not really what the blog is about) FreeBSD is 
>>>> defaulting to it.
>>>
>>> You've said this at least twice now in this thread so I'm assuming 
>>> you're asserting it to be true.
>>>
>>> As of FreeBSD 12.0-RELEASE (and all earlier releases), FreeBSD does 
>>> NOT default to ZFS.
>>>
>>> The images distributed by freebsd.org, e.g., Vagrant boxes, ARM 
>>> images, EC2 instances, etc., contain disk images where FreeBSD 
>>> resides on UFS.  For example, here's what you end up with when you 
>>> launch a 12.0-RELEASE instance using defaults on AWS (us-east-1 
>>> region: ami-03b0f822e17669866):
>>>
>>> root at freebsd:/usr/home/ec2-user # gpart show
>>> =>       3  20971509  ada0  GPT  (10G)
>>>         3       123     1  freebsd-boot  (62K)
>>>       126  20971386     2  freebsd-ufs  (10G)
>>>
>>> And this is what you get when you "vagrant up" the 
>>> freebsd/FreeBSD-12.0-RELEASE box:
>>>
>>> root at freebsd:/home/vagrant # gpart show
>>> =>       3  65013755  ada0  GPT  (31G)
>>>         3       123     1  freebsd-boot  (62K)
>>>       126   2097152     2  freebsd-swap  (1.0G)
>>>   2097278  62914560     3  freebsd-ufs  (30G)
>>>  65011838      1920        - free -  (960K)
>>>
>>>
>>> When you install from the 12.0-RELEASE ISO, the first option listed 
>>> during the partitioning stage is "Auto (UFS)  Guided Disk Setup".  
>>> The last option listed---after "Open a shell and partition by hand" 
>>> is "Auto (ZFS)  Guided Root-on-ZFS".  In other words, you have to 
>>> skip over UFS and manual partitioning to select the ZFS install option.
>>>
>>> So, I don't see what evidence there is that FreeBSD is defaulting to 
>>> ZFS.  It hasn't up to now. Will FreeBSD 13 default to ZFS?
>>
>> Umm.. well I install from memory stick images and I had a 10.2 and an 
>> 11.0, both of which had root on zfs as the default.. I had to manually 
>> change them.  I haven’t looked at anything later... so did something 
>> change?  Am I in cloud cuckoo land?
>
>
> I don't know about that, but you may well be misremembering.  I just 
> pulled down the 10.2 and 11.0 installers from 
> http://ftp-archive.freebsd.org/pub/FreeBSD-Archive/old-releases and in 
> both cases the choices listed in the "Partitioning" step are the same 
> as in the current 12.0 installer: "Auto (UFS)  Guided Disk Setup" is 
> listed first and selected by default.  "Auto (ZFS) Guided Root-on-ZFS" 
> is listed last (you have to skip past other options such as manually 
> partitioning by hand to select it).
>
> I'm confident in saying that ZFS is (or was) not the default 
> partitioning option in either 10.2 or 11.0 as officially released by 
> FreeBSD.
>
> Did you use a custom installer you made yourself when installing 10.2 
> or 11.0?

It was an emergency USB stick, so it was downloaded straight from the website.

My process is to boot, select "Manual" (so I can set up a single root 
partition and a swap partition, as historically the guided options have 
done other things), select the whole disk and create a partition - this 
is where I saw it... 'freebsd-zfs' offered as the default type.  The 
second 'create' defaults to 'freebsd-swap', which is always correct.  
Interestingly, the -CURRENT installer just says "freebsd" and not either 
-ufs or -zfs ... whatever that defaults to I don't know.
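
(For reference, the shell equivalent of what that dialog is doing - the 
device name and sizes below are just placeholders - is something like:

   gpart create -s gpt ada0
   gpart add -t freebsd-boot -s 512k ada0
   gpart add -t freebsd-ufs -s 28g ada0    # root; the type has to be given explicitly
   gpart add -t freebsd-swap ada0          # rest of the disk as swap

so the only thing the installer's "create" dialog is really choosing for 
you is that -t value, which is why the pre-filled type matters.)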
>
>
>>
>>>> FreeBSD used to be targeted at enterprise and devs (which is where 
>>>> I found it)... however the last few years have been a big push into 
>>>> the consumer (compete with Linux) market.. so you have an OS that 
>>>> concerns itself with the desktop and upgrade after upgrade after 
>>>> upgrade (not just patching security issues, but upgrades as well.. 
>>>> just like windows and OSX)... I get it.. the money is in the 
>>>> keeping of the user base.. but then you install a file system which 
>>>> is dangerous on a single disk by default... dangerous because it’s 
>>>> trusted and “can’t fail” .. until it goes titsup.com and then the 
>>>> entire drive is lost and all the data on it..  it’s the double 
>>>> standard... advocate you need ECC ram, multiple vdevs etc, then 
>>>> single drive it.. sorry.. which one is it? Gaaaaaarrrrrrrgggghhhhhhh!
>>>
>>>
>>> As people have pointed out elsewhere in this thread, it's false to 
>>> claim that ZFS is unsafe on consumer hardware.  It's no less safe 
>>> than UFS on single-disk setups.
>>>
>>> Because anecdote is not evidence, I will refrain from saying, "I've 
>>> lost far more data on UFS than I have on ZFS (especially when SUJ 
>>> was shaking out its bugs)..." >;-)
>>>
>>> What I will agree with is that, probably due to its relative youth, 
>>> ZFS has fewer forensics/data recovery tools than UFS. I'm sure this 
>>> will improve as time goes on.  (I even posted a link to an article 
>>> describing someone adding ZFS support to a forensics toolkit earlier 
>>> in this thread.)
>>
>> The problem I see with that statement is that the zfs dev mailing 
>> lists constantly and consistently follow the line that the data is 
>> always right so there is no need for a “fsck” (which I actually get), 
>> but it’s used to shut down every thread... the irony is I’m now 
>> installing windows 7 and SP1 on a usb stick (well it’s actually 
>> installed, but sp1 isn’t finished yet) so I can install a zfs data 
>> recovery tool which reports to be able to “walk the data” to retrieve 
>> all the files...  the irony eh... install windows7 on a usb stick to 
>> recover a FreeBSD installed zfs filesystem...  will let you know if 
>> the tool works, but as it was recommended by a dev I’m hopeful... 
>> have another array (with zfs I might add) loaded and ready to go... 
>> if the data recovery is successful I’ll blow away the original 
>> machine and work out what OS and drive setup will be safe for the 
>> data in the future.  I might even put FreeBSD and zfs back on it, but 
>> if I do it won’t be in the current Zraid2 config.
>
>
> There is no more irony in installing a data recovery tool to recover a 
> trashed ZFS pool than there is in installing one to recover a trashed 
> UFS file system.  No file system is bulletproof, which is why everyone 
> I know recommends a backup/disaster recovery strategy commensurate 
> with the value you place on your data. There WILL be some combination 
> of events that will lead to irretrievable data loss.  Your 
> extraordinary sequence of mishaps apparently met the threshold for ZFS 
> on your setup.
>
> I don't see how any of this leads to the conclusion that ZFS is 
> "dangerous" to use as a file system.

For me the 'dangerous' threshold is when it comes down to 'all or 
nothing'.  With UFS - even when it's trashed (and I might add I've never 
had it completely trashed on a production image) - there are tools to 
recover what is left of the data.  There are no such tools for ZFS 
(barring the one I'm about to test - it will be interesting to see if it 
works... but even then, installing Windows to recover FreeBSD :D ).
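
To be fair, ZFS does ship pool-level recovery in the import path (pool 
and device names below are placeholders):

   zpool import -f -F -n tank               # dry run: report whether discarding the
                                            # last few transactions would allow import
   zpool import -f -F -o readonly=on tank   # actually rewind and import read-only
   zdb -l /dev/da0p3                        # dump the vdev labels from one member disk

but that works by rolling back whole transaction groups - it isn't a 
file-by-file walk of whatever is still readable on the platters, which 
is what the Windows tool claims to do.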

> What I believe is dangerous is relying on a post-mortem crash data 
> recovery methodology as a substitute for a backup strategy for data 
> that, in hindsight, is considered important enough to keep. No matter 
> how resilient ZFS or UFS may be, they are no substitute for backups 
> when it comes to data you care about.  (File system resiliency will 
> not protect you, e.g., from Ransomware or other malicious or 
> accidental acts of data destruction.)

True, but nothing is perfect, even backups (how many times have we seen 
or heard stories where backups didn't actually work - and the problem 
was only identified when trying to recover from a failure?).

My situation has been made worse by the fact that I was reorganising 
everything when it went down - so my backups (of the important stuff) 
were not there, and that was a direct consequence of me throwing caution 
to the wind years before and no longer keeping a full mirror of the 
data... due to lack of space.  Interestingly, another drive has died in 
the array - and it doesn't just have one or two bad sectors, it has a 
*lot* - which was not noticed by the original machine.  I moved the 
drive to a byte copier, which is where it's reporting hundreds of 
damaged sectors...  could this be compounded by zfs/the mfi driver/the 
HBA not picking up errors like it should?
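
For cross-checking where those errors got lost, the obvious things to 
compare (pool/device names are placeholders, and smartctl comes from 
sysutils/smartmontools) would be:

   zpool status -v tank          # ZFS's own per-vdev read/write/checksum error counters
   mfiutil show drives           # what the mfi controller thinks of each physical drive
   mfiutil show events           # the controller's event log, including media errors
   smartctl -a /dev/da2          # raw SMART data: reallocated and pending sector counts

If the ZFS counters stayed at zero while SMART shows hundreds of 
reallocated sectors, the errors were being swallowed somewhere below ZFS 
rather than ignored by it.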

Michelle

-- 
Michelle Sullivan
http://www.mhix.org/


