seeing data corruption with zfs trim functionality

Ajit Jain ajit.jain at cloudbyte.com
Wed May 15 08:30:22 UTC 2013


Hi Steve,

One more thing: I am not seeing the data corruption with the SATA SSD
(Kingston). The issue was only seen on the SAS SSD, i.e. the Seagate
PULSAR ST100FM0002.

regards,
ajit


On Wed, May 15, 2013 at 1:49 PM, Ajit Jain <ajit.jain at cloudbyte.com> wrote:

> Hi Steven,
>
> Please find attached a tarball with the source code and the binary of the
> test utility.
> Test steps:
> 1. Enable ZFS TRIM in /boot/loader.conf (vfs.zfs.trim_disable=0).
> 2. Set the delete method of the SSD device to UNMAP or WS16.
> 3. Create a pool (and optionally a dataset) on the device.
> 4. Run the iotest utility with a thread count of 10 (-t option), a file size
>    of at least 5GB, a run time of at least 500 seconds (-T option) and
>    writes at 100% (W option); a rough sketch of the full sequence follows
>    below.
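>
> For reference, a rough sketch of the whole sequence (assuming the SSD shows
> up as da0, that the patched da(4) exposes the delete_method sysctl, and that
> the iotest flags for file size and write percentage are as described above;
> pool and dataset names are just placeholders):
>
>     # /boot/loader.conf, then reboot
>     vfs.zfs.trim_disable=0
>
>     # pick the delete method for the SSD (UNMAP or WS16)
>     sysctl kern.cam.da.0.delete_method=UNMAP
>
>     # pool and optional dataset on the device
>     zpool create testpool da0
>     zfs create testpool/testfs
>
>     # 10 threads, file of at least 5GB, at least 500 seconds, 100% writes
>     ./iotest -t 10 -T 500 <file-size and 100%-write options as above>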
>
> regards,
> ajit
>
>
> On Wed, May 15, 2013 at 12:56 PM, Steven Hartland <killing at multiplay.co.uk
> > wrote:
>
>> Could you provide us with details on the tests you're using, so we can
>> run them here on current sources and see if we see any issues?
>>
>>    Regards
>>    Steve
>>
>> ----- Original Message ----- From: "Ajit Jain" <ajit.jain at cloudbyte.com>
>> To: "Steven Hartland" <killing at multiplay.co.uk>
>> Cc: "freebsd-fs" <freebsd-fs at freebsd.org>
>> Sent: Wednesday, May 15, 2013 6:47 AM
>>
>> Subject: Re: seeing data corruption with zfs trim functionality
>>
>>
>>> Hi Steven,
>>>
>>> Thanks for the follow-up.
>>> The tree where I pulled in the ZFS TRIM patches is not updated to 9-stable,
>>> especially the cam directory. I pulled in many dependent patches in order
>>> to apply the patches that you gave. After that, all da devices were marked
>>> CAM_PERIPH_INVALID in dadone(), because READ CAPACITY was returning a very
>>> big number (bigger than MAXPHYS) for the block size. I think this is
>>> because I have not updated the code to 9-stable (I only pulled in the
>>> required patches and missed some).
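>>>
>>> (A quick way to check whether the inflated value really comes from the
>>> device or from the partially merged CAM code is to ask for READ CAPACITY
>>> directly, e.g. assuming the SSD is da0:
>>>
>>>     camcontrol readcap da0 -h
>>>
>>> or the matching passN device if daN has already been invalidated. If the
>>> reported block size is sane, i.e. 512 or 4096 bytes and nowhere near the
>>> 128KiB MAXPHYS, then the bad value is being produced by the half-merged
>>> scsi_da.c rather than by the drive itself.)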
>>>
>>> So I am planning to first update my code to 9-stable and then try the
>>> same test again. That might take some time.
>>>
>>>
>>> thanks again,
>>> ajit
>>>
>>>
>>> On Wed, May 15, 2013 at 2:40 AM, Steven Hartland <
>>> killing at multiplay.co.uk> wrote:
>>>
>>>  ----- Original Message ----- From: "Steven Hartland"
>>>>
>>>>  What version are you porting the changes to?
>>>>
>>>>>
>>>>>>> What SSD are you using?
>>>>>>>
>>>>>>> What LSI controller are you using?
>>>>>>>
>>>>>>>
>>>>>> I'd also like to see "zpool status" (for every pool that involves this
>>>>>> SSD) and "gpart show" against the disk itself.
>>>>>>
>>>>>>
>>>>> Also:
>>>>> 1. What FW version is your LSI? You can get this from dmesg (see the
>>>>> note below).
>>>>> 2. The exact command line you're running iotest with?
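>>>>>
>>>>> (For the firmware version, grepping the boot messages for the controller
>>>>> driver should be enough, e.g.:
>>>>>
>>>>>     dmesg | grep -i -E 'mps|mpt|firmware'
>>>>>
>>>>> the mps(4) driver prints a "Firmware: ..." line when it attaches.)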
>>>>>
>>>>>
>>>> Any update on this? I'd like to try and replicate your test here so
>>>> would appreciate as much information as possible.
>>>>
>>>>
>>>>    Regards
>>>>    Steve
>>>>
>>>
>>
>>
>

