Areca Weirdness - UFS2 larger than 2Tb problem?

Lawrence Farr freebsd-smp at epcdirect.co.uk
Fri Nov 17 23:51:57 PST 2006


> > > Re-posting to -STABLE as it also does it on i386.
> > >
> > > I reinstalled i386 -STABLE as of yesterday and newfs'd all the
> > > partitions "just in case". I got it to crash while doing a mkdir on
> > > the Areca partition, so I set up crash dumps on the boot drive (it
> > > boots off a single ATA disk; the Areca is additional storage), and
> > > it died again running the periodic scripts last night. The info file
> > > from the dump shows:
> > >
> > > Dump header from device /dev/ad0s1b
> > >  Architecture: i386
> > >  Architecture Version: 2
> > >  Dump Length: 2145452032B (2046 MB)
> > >  Blocksize: 512
> > >  Dumptime: Thu Nov 16 03:01:09 2006
> > >  Hostname: nas-2.shorewood-epc.co.uk
> > >  Magic: FreeBSD Kernel Dump
> > >  Version String: FreeBSD 6.1-20061115 #0: Wed Nov 15 04:18:11 UTC 2006
> > >    root at monitor.shorewood-epc.co.uk:/usr/obj/usr/src/sys/SMP
> > >  Panic String: ufs_dirbad: bad dir
> > >  Dump Parity: 632980830
> > >  Bounds: 0
> > >  Dump Status: good
> > >
> > > Am I expecting too much with partitions over 2Tb? I've never gone
> > > over 2Tb before, so I haven't come across any issues like this.
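> > >
> > > For reference, the crash dump setup is nothing exotic -- just the
> > > stock rc.conf knobs pointed at swap on the ATA boot disk (a minimal
> > > sketch; the device name is from this box):
> > >
> > > # /etc/rc.conf -- dump kernel core to swap on the boot disk
> > > dumpdev="/dev/ad0s1b"
> > > # savecore(8) then writes the saved dump here at boot time
> > > dumpdir="/var/crash"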
> > >
> > >> -----Original Message-----
> > >> From: owner-freebsd-amd64 at freebsd.org
> > >> [mailto:owner-freebsd-amd64 at freebsd.org] On Behalf Of Lawrence Farr
> > >> Sent: 10 November 2006 11:39
> > >> To: freebsd-amd64 at freebsd.org
> > >> Subject: Areca Weirdness
> > >>
> > >> I've got an Areca 12-port card running a 6Tb array which is divided
> > >> into 2.1Tb chunks at the moment, as it was doing the same with a
> > >> single 6Tb partition.
> > >>
> > >> ad0: 58644MB <IC35L060AVER07 0 ER6OA44A> at ata0-master UDMA100
> > >> da0 at arcmsr0 bus 0 target 0 lun 0
> > >> da0: <Areca Arc1 R001> Fixed Direct Access SCSI-3 device
> > >> da0: 166.666MB/s transfers (83.333MHz, offset 32, 16bit), Tagged
> > >> Queueing Enabled
> > >> da0: 2224922MB (4556640256 512 byte sectors: 255H 63S/T 283637C)
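> > >>
> > >> (For the record, the filesystem goes straight onto the raw device,
> > >> with no slice or label in between -- roughly this, exact flags from
> > >> memory:)
> > >>
> > >> newfs -U /dev/da0                  # UFS2 with soft updates
> > >> mount /dev/da0 /usr/home/areca1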
> > >>
> > >> If I newfs it and copy data to it, I have no problem initially. If
> > >> I then try to copy the data already on the disk to a new folder,
> > >> the machine reboots (it's a remote host with no serial console
> > >> attached currently). When it comes back to life, it mounts and
> > >> shows as:
> > >>
> > >> /dev/da0       2.1T    343G    1.6T    18%    /usr/home/areca1
> > >>
> > >> But it is completely empty. Unmounting it and then trying to fsck
> > >> it gives errors, as does mounting it by hand.
> > >>
> > >> [root at nas-2 /home]# fsck -y /dev/da0
> > >> ** /dev/da0
> > >> Cannot find file system superblock
> > >> ioctl (GCINFO): Inappropriate ioctl for device
> > >> fsck_ufs: /dev/da0: can't read disk label
> > >> [root at nas-2 /home]# mount /dev/da0
> > >> mount: /dev/da0 on /usr/home/areca1: incorrect super block
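> > >>
> > >> (About the only handle I've found on it in that state is a backup
> > >> superblock -- something like the below; the -b value is just
> > >> whatever newfs -N prints first, not a magic number:)
> > >>
> > >> newfs -N /dev/da0           # -N: print the layout only, including
> > >>                             # backup superblock offsets; writes nothing
> > >> fsck_ufs -b 160 -y /dev/da0 # retry fsck using a backup superblock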
> > >>
> > >> Are there any known issues with the driver on AMD64? I had major
> > >> issues with it on Linux/386 with large memory support (it would
> > >> behave equally strangely) that went away when I took large memory
> > >> support out. Maybe there are some non-64-bit-safe parts common to
> > >> both?
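> > >>
> > >> (One detail that keeps nagging me: 2Tb is exactly where a 32-bit
> > >> sector number runs out, and this volume is just past that line.
> > >> Quick arithmetic, assuming 512-byte sectors:)
> > >>
> > >> echo '2^32 * 512' | bc          # 2199023255552 bytes = the 2TiB mark
> > >> echo '4556640256 - 2^32' | bc   # da0 is 261672960 sectors past it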
> > 
> > I have the Areca 8-port PCI-X card, 2 arrays of 1.25T each, and no
> > issues yet. I've been using it on 5.x for a year, and now on 6.x it's
> > perfect too. Have you updated the card to the latest firmware?
> >
> > I've just done a test copy of 10 gigs from the one volume to the
> > other. It ran at over 80M/s and took under 2 minutes with no errors.
> >
> > Did you follow the instructions provided with the Areca card for
> > creating volumes over 2TB? There are some things you have to do so
> > that the OS works correctly with it.
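> >
> > One quick sanity check is whether the OS is really seeing the capacity
> > the card exports, e.g.:
> >
> > diskinfo -v /dev/da0    # prints sector size, byte count, sector count
> >
> > If the byte count there doesn't match what the Areca BIOS shows for
> > the volume set, the over-2TB setup step is the first thing I'd revisit.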
> > 
> > -Clay
> 
> I'm not convinced it's an Areca problem anymore, as I've copied 300 or
> so Gb onto it now, but it will randomly corrupt the fs and reboot
> itself. Background fsck will not fix it, but a manual one does. I'm
> going to drop the partitions below 2Tb and run the test again to try
> to eliminate any hardware/driver issues.
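>
> Before that I might try a raw read/write straight across the 2Tb mark
> to take UFS out of the picture -- something like this (destructive, so
> only while the volume holds nothing I care about):
>
> dd if=/dev/random of=/tmp/marker bs=512 count=1
> # sector 4294967297 sits just past the 32-bit (2TiB) boundary
> dd if=/tmp/marker of=/dev/da0 bs=512 seek=4294967297
> dd if=/dev/da0 of=/tmp/check bs=512 skip=4294967297 count=1
> cmp /tmp/marker /tmp/check && echo "boundary sector reads back OK"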


This has now panicked while idle, and with only a 1.8Tb fs mounted.
Didn't get a core, but got this:

Nov 17 19:43:39 nas-2 savecore: reboot after panic: ffs_valloc: dup alloc
Nov 17 19:43:39 nas-2 savecore: no dump, not enough free space on device (661322 available, need 2095170)
Nov 17 19:43:39 nas-2 savecore: unsaved dumps found but not saved
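
The savecore failure at least looks self-contained: those numbers are
presumably kB, i.e. a ~2Gb dump and only ~650Mb free where savecore
writes, so no tuning will squeeze it in there. Pointing savecore at a
bigger filesystem should catch the next one (the path is only an example):

mkdir -p /usr/crash                           # anywhere with >2Gb free
echo 'dumpdir="/usr/crash"' >> /etc/rc.conf   # savecore writes here now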

Anyone got any ideas where to start looking?


