Default inode number too low in FFS nowadays?
Daniel Kalchev
daniel at digsys.bg
Wed Nov 2 14:14:57 UTC 2011
On 02.11.11 15:48, Lee Dilkie wrote:
>
> On 11/2/2011 1:36 PM, Daniel Kalchev wrote:
>>
>>
>> On 02.11.11 15:13, Jeremy Chadwick wrote:
>>> On Wed, Nov 02, 2011 at 12:57:33PM +0100, Borja Marcos wrote:
>>>> Today I've come across an issue long ago forgotten :) Running out
>>>> of i-nodes.
>>>
>> Just for the completeness of it, one would use ZFS and be done with
>> this issue. :-)
>
> Are you suggesting that ZFS be the default FS?
Not really. Perhaps we might consider something like this for 10.0 or
11.0 -- today too many people are still wary of ZFS, and there are
already trivial ways to do a ZFS-only FreeBSD install -- so there is no
need to hurry.
> My only concern with ZFS is that it still appears to be in flux and
> have some issues. I don't know, from monitoring this list, if those
> are issues that heavy load users experience and ZFS is as stable as
> UFS or if it isn't. I just know I see issues being raised.
>
Personally, I have two issues with ZFS: memory use and ... that it very
quickly exposes bad hardware. I am currently at something like ~85% of
my system farm converted to ZFS-only. In the process, too many
components proved to be bad. Disks that previously seemed 'wonderful'
display checksum errors under ZFS. Guess what --- these disks were
happily reading/writing garbage with UFS and nobody ever noticed!
This is a serious "issue" with going to ZFS .. one that has prompted me
to convert every active system to ZFS-only, although that requires
considerably more memory.
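Those errors surface in the CKSUM column of zpool status. A sketch of how I look for them, assuming a pool named 'tank' (the name and the output layout are my assumptions):

```shell
# Show per-device error counters; a nonzero CKSUM count means a block
# failed checksum verification on read, pointing at the disk, cable,
# or controller rather than at ZFS itself.
zpool status -v tank

# Force a full verification pass: read and checksum every block in
# the pool, repairing from redundancy where possible.
zpool scrub tank
```

UFS has no end-to-end checksums, which is exactly why those same disks "worked" before.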
Another issue I have with ZFS is that it is not (yet) trivial to use for
read-only installs, especially for the root filesystem. I have a
multitude of systems that mount all their 'system' partitions read-only
(UFS), with only the data partitions writable. I have yet to discover
how to do this with ZFS only.
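The closest I have found is the per-dataset readonly property. A sketch, assuming a boot pool named 'zroot' with a conventional dataset layout (both are my assumptions, not a recipe):

```shell
# Mark the system datasets read-only while leaving data writable.
zfs set readonly=on  zroot/usr
zfs set readonly=on  zroot/var
zfs set readonly=off zroot/data

# For upgrades, flip a dataset back temporarily:
zfs set readonly=off zroot/usr
```

The sticking point is the root dataset itself: the system wants to write there early at boot, which is what makes the fully read-only root nontrivial compared to a UFS `ro` mount in fstab.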
Yet another issue, more pronounced with v28 than with v15, is that when
your zpool gets full, performance becomes abysmal. That is particularly
bad for systems that are nearly full most of the time --- easily fixable
with larger disks, I know..
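One workaround short of buying disks is to cap the writable data so the pool never gets that full. A sketch -- the pool name, dataset, and the ~80% figure are my assumptions, not a ZFS rule:

```shell
# On a hypothetical 1T pool, keep the bulk data under roughly 80%:
zfs set quota=800G tank/data

# Watch how full the pool actually is (CAP is percent of capacity used):
zpool list -o name,size,capacity tank
```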
Yet another issue with ZFS is that while traditional UNIX partitioning
semantics are local (partitions a, b, c on drive1 are distinct from
partitions a, b, c on drive2), ZFS pool names are global. You cannot
have two 'system' pools with the same name on the same system, and that
makes some historic habits difficult to apply. The same trouble exists
with GEOM/GPT labels too, so we may just have to grow up.
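The one escape hatch I know of is that a pool can be given a new name at import time, which sidesteps the clash when you attach a second 'system' disk ('tank' here is a hypothetical name):

```shell
# Export the pool on its original machine, then on the new machine
# import it under a different name so it cannot collide:
zpool export tank
zpool import tank tank2
```

That helps for recovery work, but it does not bring back the per-drive a/b/c naming habit.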
Other than that, my experience with ZFS has been more than wonderful.
Daniel
More information about the freebsd-fs mailing list