ZFS
Wilko Bulte
wb at freebie.xs4all.nl
Thu Sep 16 14:22:36 PDT 2004
On Thu, Sep 16, 2004 at 09:18:37PM +0000, Kris Kennaway wrote:
> On Thu, Sep 16, 2004 at 10:31:57AM -0500, Sam wrote:
>
> > >CERN's LHC is expected to produce 10-15 PB/year. e-science ("the grid")
> > >is capable of producing whopping huge data sets, and people already are.
> > >Many aspects of data custodianship are still open questions, but there's
> > >little doubt that what counts as cutting-edge storage today will be
> > >routine in filesystems within the next 10 years. Filesystem views on data
> > >sets that are physically stored and replicated at disparate locations
> > >around the planet are the kind of things that potentially need larger
> > >than 64-bit quantities.
> > >
> >
> > Let's suppose you generate an exabyte of storage per year. Filling a
> > 64-bit filesystem would take you approximately 8 million years.
> >
> > I'm not saying we'll never get there, just that doing it now is nothing
> > more than a "look at us, ain't we forward thinking" ploy. It's a
> > _single filesystem_. If you want another 8192 ZB, just make another.
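
Checking Sam's arithmetic: the post doesn't say what block size turns 64
bits into "8192 ZB", but 2**64 blocks of 512 KiB, in binary units,
happens to reproduce both of his figures. A minimal sketch in Python,
with that block size as an assumption:

    # Sanity check of the figures above. Assumptions: 64-bit *block*
    # addresses (a 64-bit *byte* limit is only 16 EiB) and a 512 KiB
    # block size, with binary (power-of-two) units throughout.
    blocks     = 2**64
    block_size = 512 * 1024          # assumed, not stated in the post
    capacity   = blocks * block_size

    EiB, ZiB = 2**60, 2**70
    print(2**64 / EiB)               # 16.0   -> 64-bit byte limit
    print(capacity / ZiB)            # 8192.0 -> the "8192 ZB"
    print(capacity / EiB)            # ~8.4e6 -> years to fill at one
                                     #           exabyte written per year
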
>
> The detectors in the particle accelerator at Fermilab produce raw data
> at a rate of 100 TB/sec (yes, 100 terabytes per second). They have to
> use a three-tiered system of hardware filters to throw away most of
> this and try to pick out the events that might actually be
> interesting, to get it down to a "slow" data rate of 100 MB/sec that
> can actually be written out to storage. If the hardware and software [...]
100 MB/s is "slow"? I think this number is wrong.
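
For scale, the quoted figures can at least be cross-checked against each
other: 100 MB/s sustained for a year is roughly 3 PB, the same order of
magnitude as the 10-15 PB/year CERN estimate at the top of the thread.
A rough sketch in Python, assuming decimal units:

    # Rough cross-check of the detector numbers quoted above.
    MB, TB, PB = 10**6, 10**12, 10**15

    raw    = 100 * TB                # "100 TB/sec" off the detectors
    stored = 100 * MB                # "100 MB/sec" after filtering

    print(raw / stored)              # 1e6 -> filters keep about one
                                     #        byte in a million
    print(stored * 86400 * 365 / PB) # ~3.2 PB/year of stored data
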
--
Wilko Bulte wilko at FreeBSD.org