Growing large UFS over 16TB?
Tobias Fredriksson
tobfr108 at gmail.com
Tue Oct 12 10:36:25 UTC 2010
Yes,
Unfortunately I can't move the data from UFS to ZFS as I have no other array available with enough space.
System information if that helps
FreeBSD stor1.vmlocal.lan 7.0-RELEASE amd64
8GB ram
3ware 9650SE-24M8
On 12 Oct 2010, at 12:09, Torbjorn Kristoffersen wrote:
> On Mon, Oct 11, 2010 at 10:18 PM, Tobias Fredriksson <tobfr108 at gmail.com> wrote:
>> Hi,
>>
>> So I have this UFS filesystem; unfortunately it was created with a bare "newfs /dev/da1", with no GPT partitioning or anything like that.
>>
>> It started out as a 6x2TB RAID, then we expanded it with 2x2TB more.
>>
>> Growing this filesystem was not possible with the stock tool; however, with the patch from http://masq.tychl.net/growfs.patch it worked fine.
>>
>> Now that I'm trying to grow it from 12TB to closer to 20TB, it fails after about 15 minutes.
>> It prints the super-block backup locations all the way up to "39061769312", then sits there reading from the volume for another 5-10 minutes.
>>
>> The following error message is then output
>> growfs: rdfs: attempting to read negative block number: Inappropriate ioctl for device
>>
>> I understand the reason for this: it's trying to read a block number whose integer just wrapped around. Nice.
>> The relevant lines from growfs.c are
>>
>> static void
>> rdfs(ufs2_daddr_t bno, size_t size, void *bf, int fsi)
>> {
>> [...]
>> if (bno < 0) {
>> err(32, "rdfs: attempting to read negative block number");
>> }
>> [...]
>>
>> Just for fun I commented that if check out and recompiled:
>> growfs: rdfs: read error: -4889807711788704476: Input/output error
>>
>> The only place that ufs2_daddr_t is defined is in /usr/include/ufs/ufs/dinode.h
>> typedef int64_t ufs2_daddr_t;
>>
>> So again, for fun, I changed this to u_int64_t. I also un-commented that if block in growfs.c.
>>
>> This produced the same message as before, but no longer the negative-block-number error:
>> growfs: rdfs: read error: -4889807711788704476: Input/output error
>>
>> This leads me to believe that I'm at least doing something partially right.
>>
>> The next thing rdfs() in growfs.c does is
>> n = read(fsi, bf, size);
>> if (n != (ssize_t)size) {
>> err(34, "rdfs: read error: %jd", (intmax_t)bno);
>> }
>>
>> So I changed
>> "rdfs(ufs2_daddr_t bno, size_t size, void *bf, int fsi)"
>> to
>> "rdfs(ufs2_daddr_t bno, size_t size, void *bf, u_int64_t fsi)"
>> However this changed nothing. Same output.
>> Since it's failing at that if statement, I figured the problem must be ssize_t not being big enough.
>> So I changed the definitions in /usr/include/machine/_types.h and also looked at _limits.h in the same directory.
>> _types.h
>> typedef __int64_t __ssize_t;
>> to
>> typedef __uint64_t __ssize_t;
>>
>> _limits.h
>> #define __SSIZE_MAX __LONG_MAX /* max value for a ssize_t */
>> #define __SIZE_T_MAX __ULONG_MAX /* max value for a size_t */
>> to
>> #define __SSIZE_MAX __ULLONG_MAX /* max value for a ssize_t */
>> #define __SIZE_T_MAX __ULLONG_MAX /* max value for a size_t */
>>
>> However, this made no difference either. I reverted all of the later changes, since none of them helped.
>>
>> So I'm turning to the fs gurus. At the moment I have no way of moving the data off and recreating the filesystem properly, so switching to another fs is not an option right now either.
>>
>> So if anybody has any suggestions on how to temporarily fix the issue until we can move the data off the RAID and rebuild it properly, please let me know.
>>
>> [root at stor1 /usr/src/sbin/growfs]# ./growfs /dev/da1
>> We strongly recommend you to make a backup before growing the Filesystem
>>
>> Did you backup your data (Yes/No) ? Yes
>> new file systemsize is: 9765570560 frags
>> Warning: 136832 sector(s) cannot be allocated.
>> growfs: 19073314.0MB (39062145408 sectors) block size 16384, fragment size 2048
>> using 103818 cylinder groups of 183.72MB, 11758 blks, 23552 inodes.
>> [...]
>> growfs: rdfs: read error: -4889807711788704476: Input/output error
>>
>
> Unfortunately I can't answer your question, but have you considered using ZFS?
More information about the freebsd-fs mailing list