seeing data corruption with zfs trim functionality
Ajit Jain
ajit.jain at cloudbyte.com
Wed May 15 05:48:15 UTC 2013
Hi Steven,
Thanks for the follow-up.
The tree into which I pulled the ZFS TRIM patches is not updated to 9-STABLE,
especially the cam directory.
I pulled in many dependent patches in order to apply the patches that you
gave. After that, all da devices went CAM_PERIPH_INVALID in dadone(),
because READ CAPACITY was returning a very big number (bigger than MAXPHYS)
for the block size. I think this is because I have not updated the code to
9-STABLE (I only pulled in the required patches and missed some).
So I am planning to first update my tree to 9-STABLE and then try the same
test again. That might take some time.
thanks again,
ajit
On Wed, May 15, 2013 at 2:40 AM, Steven Hartland <killing at multiplay.co.uk> wrote:
> ----- Original Message ----- From: "Steven Hartland"
>
> What version are you porting the changes to?
>>>>
>>>> What SSD are you using?
>>>>
>>>> What LSI controller are you using?
>>>>
>>>
>>> I'd also like to see "zpool status" (for every pool that involves this
>>> SSD) and "gpart show" against the disk itself.
>>>
>>
>> Also:
>> 1. What FW version is your LSI? You can get this from dmesg.
>> 2. The exact command line you're running iotest with?
>>
>
> Any update on this? I'd like to try and replicate your test here so
> would appreciate as much information as possible.
>
>
> Regards
> Steve
>
More information about the freebsd-fs mailing list