Hard drive fullness limits information help request

Micheal Patterson micheal at tsgincorporated.com
Mon Apr 11 14:09:11 PDT 2005



----- Original Message ----- 
From: "NMH" <drumslayer2 at yahoo.com>
To: "hardware" <freebsd-hardware at freebsd.org>; "questions"
<freebsd-questions at freebsd.org>
Sent: Monday, April 11, 2005 2:30 PM
Subject: Hard drive fullness limits information help request


> Hi all
>   I know hard drives tend to not run well when near
> full. They have trouble performing self adjustments
> (hardware), self defragging(unix/FFS) etc.. (as I can
> express it) However, I need to find some documentation
> or some help in explaining this better.
>   I am working with some people who store loads of
> files, on many drives and tend to fill the drives to
> 95% and more and then can't understand why they become
> unstable.  I need to be able to explain it better and
> I would also like to know more to be able to
> factually/sanely set a percent full safe limit.
>
>  Any help would be appreciated
>
>  Thanks!
>
>  NMH.
>
>
>
> The Large Print Giveth And The Small Print Taketh Away
>  -- Anon


NMH,

If these people are old enough to remember LP records, explain it to them in
this fashion.

A hard drive is much like an older LP record: multiple songs, in sequential
order. You can play them in any order you wish by moving the tone arm to a
different song on the album. Now, say that you don't like track 3 and wish
to delete it (if you could). You would end up with 3 minutes of blank space
on the album. So, you want to add another song that you do like, but it's
3 minutes 30 seconds long and won't fit into a 3 minute slot. A hard drive
is able to place those extra 30 seconds at the end of the current space and
jump to them when needed, and you never know the difference. Now, if this
happens a lot, meaning removing data, adding larger data, removing data,
adding smaller chunks of data, etc., the actual data gets scattered
throughout the disk. This is known as data fragmentation. Drives and
filesystems are able to deal with this to a considerable degree; however,
the more fragmented a drive is, the harder it has to work in order to make
that unnoticed jump. As the drive works harder, access times grow longer
and there is a higher potential for data loss. When a drive reaches higher
usage (90%+ utilization), there isn't much contiguous room left to handle
those scattered chunks of data.
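If it helps to make the point concrete, here's a toy sketch of the same
idea (names and the first-fit policy are my own illustration, not how FFS
actually allocates): on a mostly full disk whose free space is scattered,
a new file has to be split across several separate free runs, while on an
empty disk it fits in one contiguous piece.

```python
# Toy model of block allocation on a small "disk".
# free_map is a list of booleans: True = free block, False = in use.

def fragments_needed(free_map, size):
    """Count how many separate free runs a file of `size` blocks
    must be split across, filling runs first-to-last.
    Returns None if the file doesn't fit at all."""
    # Collect the lengths of each contiguous run of free blocks.
    runs, run = [], 0
    for free in free_map:
        if free:
            run += 1
        else:
            if run:
                runs.append(run)
            run = 0
    if run:
        runs.append(run)
    # Fill runs in order until the file is fully placed.
    pieces = 0
    for r in runs:
        if size <= 0:
            break
        size -= min(r, size)
        pieces += 1
    return pieces if size <= 0 else None

# A 90%+ full disk with its free space scattered in small gaps:
scattered = ([False] * 10 + [True] * 2) * 3
print(fragments_needed(scattered, 4))    # split across multiple runs
print(fragments_needed([True] * 36, 4))  # empty disk: one contiguous piece
```

Each extra piece is one more of those "unnoticed jumps" the drive has to
make on every read of that file.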

That's the analogy that I used to use and it worked pretty well for me. Your
mileage may vary.

--

Micheal Patterson
Senior Communications Systems Engineer
405-917-0600

Confidentiality Notice:  This e-mail message, including any attachments,
is for the sole use of the intended recipient(s) and may contain
confidential and privileged information. Any unauthorized review, use,
disclosure or distribution is prohibited. If you are not the intended
recipient, please contact the sender by reply e-mail and destroy all
copies of the original message.


