kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix

mikej mikej at mikej.com
Mon Mar 24 23:02:22 UTC 2014


Karl,

I don't know why the PR system doesn't allow multiple attachments for 
patches to be sent so they can easily be downloaded.  I continue to be 
unsuccessful at getting patches out of email that don't have line-wrap 
or tab issues.  So, again, if you could send a link or post the latest 
patch I would be glad to let it churn.

If I need an education about the PR system, please someone help me.

That last patch you sent me privately applied like this against 
r263618M:

Hmm...  Looks like a new-style context diff to me...
The text leading up to this was:
--------------------------
|*** arc.c.original     Thu Mar 13 09:18:48 2014
|--- arc.c      Fri Mar 21 09:04:23 2014
--------------------------
Patching file arc.c using Plan A...
Hunk #1 succeeded at 18.
Hunk #2 succeeded at 210.
Hunk #3 succeeded at 281.
Hunk #4 succeeded at 2539.
Hunk #5 succeeded at 2582.
Hunk #6 succeeded at 2599.
done
[root at charon /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs]#

but failed to build.
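
For reference, the earlier build failure quoted below was the 
undeclared identifier 'cnt'.  Karl notes that the declaration lives in 
<sys/vmmeter.h> and that the patch pulls it in near the top of arc.c 
when NEWRECLAIM is defined; the archived copy of the diff lost the 
filenames inside its #include lines.  As a rough sketch only (just 
<sys/vmmeter.h> is confirmed in this thread), the guarded include 
presumably looks something like:

#ifdef  NEWRECLAIM
#ifdef  __FreeBSD__
#include <sys/vmmeter.h>  /* declares the global "cnt" vmmeter the new checks read */
#endif
#endif  /* NEWRECLAIM */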

I hope others chime in, but so far the updates from these patches have 
been positive for me, with the understanding that everyone's workloads 
vary.

Unless I knowingly drive the system into resource starvation, 
everything is much better: the system is not holding wired memory and 
not driving into swap.

My main background resource hogs are poudriere and a random load.
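
For anyone who wants to sanity-check the new default before letting it 
churn, here is a small standalone sketch of the arithmetic used by the 
quoted patch below.  The formula (v_free_target less v_free_target/33, 
plus an optional percentage of total pages) is taken from the patch; 
the page counts are made-up illustrative values, not readings from my 
box:

#include <stdio.h>

int
main(void)
{
        /* Made-up illustrative values; on a real system these would
         * come from the kernel's VM counters (what the patch reads
         * through "cnt"). */
        unsigned v_free_target = 56518;    /* pages */
        unsigned v_page_count  = 2042880;  /* total pages (~8GB at 4KB/page) */
        unsigned v_free_count  = 52000;    /* currently free pages */
        int percent_target     = 0;        /* vfs.zfs.arc_freepage_percent */

        /* Default vfs.zfs.arc_freepages: v_free_target less ~3% (the "/ 33"). */
        unsigned freepages = v_free_target - (v_free_target / 33);

        /* The patch asks for ARC shrinkage when free pages fall below
         * the base requirement plus the optional percentage of RAM. */
        unsigned required = freepages + ((v_page_count / 100) * percent_target);

        printf("default vfs.zfs.arc_freepages = %u\n", freepages);
        printf("reclaim wanted: %s\n", v_free_count < required ? "yes" : "no");
        return (0);
}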

Regards,

--mikej


On 2014-03-24 11:21, Karl Denninger wrote:
> Mike;
>
> Did the patch apply cleanly?
>
> That declaration is in <sys/vmmeter.h> which should be included up
> near the top of the file if NEWRECLAIM is defined and the patch
> applied ok.
>
> See here, last entry (that's the most-recent rev), revert to the
> stock arc.c (or put arc.c.orig back, which should be the original
> file) and re-apply it.
>
> http://www.freebsd.org/cgi/query-pr.cgi?pr=187594
>
> Lastly, what OS rev are you running?  The patch is against 10-STABLE;
> it is ok against both the current checked-in rev of arc.c and the
> previous (prior to the new feature flags being added a week or so 
> ago)
> rev back.
>
> It sounds like the include didn't get applied.
>
> On 3/24/2014 10:00 AM, mikej wrote:
>> Karl,
>>
>> Not being a C coder it appears a declaration is missing.
>>
>> --- arc.o ---
>> 
>> /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2614:15: error: use of undeclared identifier 'cnt'
>>                 freepages = cnt.v_free_target - (cnt.v_free_target / 33);
>>
>> Thanks again,
>> Michael Jung
>>
>>
>> On 2014-03-21 20:05, Karl Denninger wrote:
>>> Here 'ya go...
>>>
>>>  Please keep me posted (the list is best as the more public 
>>> commentary
>>> the better, and if this needs more tuning that's the way to find 
>>> out!)
>>> on how it works for you.
>>>
>>>  I have it in production at this point and am happy with it -- the
>>> current default is at the pager "wakeup" level less 3%, but it of
>>> course can be tuned manually.
>>>
>>> On 3/21/2014 3:59 PM, mikej wrote:
>>>
>>>> Karl,
>>>>
>>>> I've looked at my raw mailbox and something is trashing tabs and
>>>> line
>>>> length for your more recent patches in email.
>>>>
>>>> I did not see any attachments, nor updates to the PR for download 
>>>> -
>>>> would
>>>> you mind sending me the latest patch as an attachment?
>>>>
>>>> Thanks for your work, I believe this is going to add real stability
>>>> without having to set vfs.zfs.arc_max and other tunables.
>>>>
>>>> Kind regards,
>>>>
>>>> Michael Jung
>>>>
>>>> On 2014-03-20 13:10, Karl Denninger wrote:
>>>>
>>>>> The following reply was made to PR kern/187594; it has been noted
>>>>> by GNATS.
>>>>>
>>>>> From: Karl Denninger [1]
>>>>> To: bug-followup at FreeBSD.org [2], karl at fs.denninger.net [3]
>>>>> Cc:
>>>>> Subject: Re: kern/187594: [zfs] [patch] ZFS ARC behavior problem
>>>>> and fix
>>>>> Date: Thu, 20 Mar 2014 12:00:54 -0500
>>>>>
>>>>>  Responsive to avg's comment, and with another overnight and
>>>>>  daytime load of testing on multiple machines with varying memory
>>>>>  configs from 4-24GB of RAM, here is another version of the patch.
>>>>>
>>>>>  The differences are:
>>>>>
>>>>>  1. No longer use kernel_sysctlbyname; include the VM header file
>>>>>  and get the values directly (less overhead.)  Remove the variables
>>>>>  no longer needed.
>>>>>
>>>>>  2. Set the default free RAM level for ARC shrinkage to
>>>>>  v_free_target less 3%, as I was able to provoke a stall once with
>>>>>  it set to a 5% reservation, was able to provoke it with the
>>>>>  parameter set to 10% with a lot of work, and was able to do so "on
>>>>>  demand" with it set to 20%.  With a 5% invasion, initiating a
>>>>>  scrub with very heavy I/O and image load (hundreds of web and
>>>>>  database processes) provoked a ~10 second system stall.  With it
>>>>>  set to 3% I have not been able to reproduce the stall, yet the
>>>>>  inactive page count remains stable even under extremely heavy
>>>>>  load, indicating that page-stealing remains effective when
>>>>>  required.  Note that for my workload, even with this level set
>>>>>  above v_free_target, which would imply no page stealing by the VM
>>>>>  system before ARC expansion is halted, I do not get unbridled
>>>>>  inactive page growth.
>>>>>
>>>>>  As before, vfs.zfs.arc_freepages and vfs.zfs.arc_freepage_percent
>>>>>  remain as accessible knobs if you wish to twist them for some
>>>>>  reason to compensate for an unusual load profile or machine
>>>>>  configuration.
>>>>>
>>>>>  *** arc.c.original    Thu Mar 13 09:18:48 2014
>>>>>  --- arc.c    Thu Mar 20 11:51:48 2014
>>>>>  ***************
>>>>>  *** 18,23 ****
>>>>>  --- 18,94 ----
>>>>>      *
>>>>>      * CDDL HEADER END
>>>>>      */
>>>>>  +
>>>>>  + /* Karl Denninger (karl at denninger.net [4]), 3/20/2014, FreeBSD-specific
>>>>>  +  *
>>>>>  +  * If "NEWRECLAIM" is defined, change the "low memory" warning that causes
>>>>>  +  * the ARC cache to be pared down.  The reason for the change is that the
>>>>>  +  * apparent attempted algorithm is to start evicting ARC cache when free
>>>>>  +  * pages fall below 25% of installed RAM.  This maps reasonably well to how
>>>>>  +  * Solaris is documented to behave; when "lotsfree" is invaded ZFS is told
>>>>>  +  * to pare down.
>>>>>  +  *
>>>>>  +  * The problem is that on FreeBSD machines the system doesn't appear to be
>>>>>  +  * getting what the authors of the original code thought they were looking at
>>>>>  +  * with its test -- or at least not what Solaris did -- and as a result that
>>>>>  +  * test never triggers.  That leaves the only reclaim trigger as the "paging
>>>>>  +  * needed" status flag, and by the time * that trips the system is already
>>>>>  +  * in low-memory trouble.  This can lead to severe pathological behavior
>>>>>  +  * under the following scenario:
>>>>>  +  * - The system starts to page and ARC is evicted.
>>>>>  +  * - The system stops paging as ARC's eviction drops wired RAM a bit.
>>>>>  +  * - ARC starts increasing its allocation again, and wired memory grows.
>>>>>  +  * - A new image is activated, and the system once again attempts to page.
>>>>>  +  * - ARC starts to be evicted again.
>>>>>  +  * - Back to #2
>>>>>  +  *
>>>>>  +  * Note that ZFS's ARC default (unless you override it in /boot/loader.conf)
>>>>>  +  * is to allow the ARC cache to grab nearly all of free RAM, provided nobody
>>>>>  +  * else needs it.  That would be ok if we evicted cache when required.
>>>>>  +  *
>>>>>  +  * Unfortunately the system can get into a state where it never
>>>>>  +  * manages to page anything of materiality back in, as if there is active
>>>>>  +  * I/O the ARC will start grabbing space once again as soon as the memory
>>>>>  +  * contention state drops.  For this reason the "paging is occurring" flag
>>>>>  +  * should be the **last resort** condition for ARC eviction; you want to
>>>>>  +  * (as Solaris does) start when there is material free RAM left BUT the
>>>>>  +  * vm system thinks it needs to be active to steal pages back in the attempt
>>>>>  +  * to never get into the condition where you're potentially paging off
>>>>>  +  * executables in favor of leaving disk cache allocated.
>>>>>  +  *
>>>>>  +  * To fix this we change how we look at low memory, declaring two new
>>>>>  +  * runtime tunables.
>>>>>  +  *
>>>>>  +  * The new sysctls are:
>>>>>  +  * vfs.zfs.arc_freepages (free pages required to call RAM "sufficient")
>>>>>  +  * vfs.zfs.arc_freepage_percent (additional reservation percentage, default 0)
>>>>>  +  *
>>>>>  +  * vfs.zfs.arc_freepages is initialized from vm.v_free_target, less 3%.
>>>>>  +  * This should insure that we allow the VM system to steal pages first,
>>>>>  +  * but pare the cache before we suspend processes attempting to get more
>>>>>  +  * memory, thereby avoiding "stalls."  You can set this higher if you wish,
>>>>>  +  * or force a specific percentage reservation as well, but doing so may
>>>>>  +  * cause the cache to pare back while the VM system remains willing to
>>>>>  +  * allow "inactive" pages to accumulate.  The challenge is that image
>>>>>  +  * activation can force things into the page space on a repeated basis
>>>>>  +  * if you allow this level to be too small (the above pathological
>>>>>  +  * behavior); the defaults should avoid that behavior but the sysctls
>>>>>  +  * are exposed should your workload require adjustment.
>>>>>  +  *
>>>>>  +  * If we're using this check for low memory we are replacing the previous
>>>>>  +  * ones, including the oddball "random" reclaim that appears to fire far
>>>>>  +  * more often than it should.  We still trigger if the system pages.
>>>>>  +  *
>>>>>  +  * If you turn on NEWRECLAIM_DEBUG then the kernel will print on the console
>>>>>  +  * status messages when the reclaim status trips on and off, along with the
>>>>>  +  * page count aggregate that triggered it (and the free space) for each
>>>>>  +  * event.
>>>>>  +  */
>>>>>  +
>>>>>  + #define    NEWRECLAIM
>>>>>  + #undef    NEWRECLAIM_DEBUG
>>>>>  +
>>>>>  +
>>>>>     /*
>>>>>      * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
>>>>>      * Copyright (c) 2013 by Delphix. All rights reserved.
>>>>>  ***************
>>>>>  *** 139,144 ****
>>>>>  --- 210,222 ----
>>>>>
>>>>>     #include
>>>>>
>>>>>  + #ifdef    NEWRECLAIM
>>>>>  + #ifdef    __FreeBSD__
>>>>>  + #include
>>>>>  + #include
>>>>>  + #endif
>>>>>  + #endif    /* NEWRECLAIM */
>>>>>  +
>>>>>     #ifdef illumos
>>>>>     #ifndef _KERNEL
>>>>>     /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
>>>>>  ***************
>>>>>  *** 203,218 ****
>>>>>  --- 281,316 ----
>>>>>     int zfs_arc_shrink_shift = 0;
>>>>>     int zfs_arc_p_min_shift = 0;
>>>>>     int zfs_disable_dup_eviction = 0;
>>>>>  + #ifdef    NEWRECLAIM
>>>>>  + #ifdef    __FreeBSD__
>>>>>  + static    int freepages = 0;    /* This much memory is considered critical */
>>>>>  + static    int percent_target = 0;    /* Additionally reserve "X" percent free RAM */
>>>>>  + #endif    /* __FreeBSD__ */
>>>>>  + #endif    /* NEWRECLAIM */
>>>>>
>>>>>     TUNABLE_QUAD("vfs.zfs.arc_max", &zfs_arc_max);
>>>>>     TUNABLE_QUAD("vfs.zfs.arc_min", &zfs_arc_min);
>>>>>     TUNABLE_QUAD("vfs.zfs.arc_meta_limit", &zfs_arc_meta_limit);
>>>>>  + #ifdef    NEWRECLAIM
>>>>>  + #ifdef    __FreeBSD__
>>>>>  + TUNABLE_INT("vfs.zfs.arc_freepages", &freepages);
>>>>>  + TUNABLE_INT("vfs.zfs.arc_freepage_percent", &percent_target);
>>>>>  + #endif    /* __FreeBSD__ */
>>>>>  + #endif    /* NEWRECLAIM */
>>>>>  +
>>>>>     SYSCTL_DECL(_vfs_zfs);
>>>>>     SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_max, CTLFLAG_RDTUN, &zfs_arc_max, 0,
>>>>>         "Maximum ARC size");
>>>>>     SYSCTL_UQUAD(_vfs_zfs, OID_AUTO, arc_min, CTLFLAG_RDTUN, &zfs_arc_min, 0,
>>>>>         "Minimum ARC size");
>>>>>
>>>>>  + #ifdef    NEWRECLAIM
>>>>>  + #ifdef    __FreeBSD__
>>>>>  + SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepages, CTLFLAG_RWTUN, &freepages, 0, "ARC Free RAM Pages Required");
>>>>>  + SYSCTL_INT(_vfs_zfs, OID_AUTO, arc_freepage_percent, CTLFLAG_RWTUN, &percent_target, 0, "ARC Free RAM Target percentage");
>>>>>  + #endif    /* __FreeBSD__ */
>>>>>  + #endif    /* NEWRECLAIM */
>>>>>  +
>>>>>     /*
>>>>>      * Note that buffers can be in one of 6 states:
>>>>>      *    ARC_anon    - anonymous (discussed below)
>>>>>  ***************
>>>>>  *** 2438,2443 ****
>>>>>  --- 2536,2546 ----
>>>>>     {
>>>>>
>>>>>     #ifdef _KERNEL
>>>>>  + #ifdef    NEWRECLAIM_DEBUG
>>>>>  +     static    int    xval = -1;
>>>>>  +     static    int    oldpercent = 0;
>>>>>  +     static    int    oldfreepages = 0;
>>>>>  + #endif    /* NEWRECLAIM_DEBUG */
>>>>>
>>>>>         if (needfree)
>>>>>             return (1);
>>>>>  ***************
>>>>>  *** 2476,2481 ****
>>>>>  --- 2579,2585 ----
>>>>>             return (1);
>>>>>
>>>>>     #if defined(__i386)
>>>>>  +
>>>>>         /*
>>>>>          * If we're on an i386 platform, it's possible that we'll exhaust the
>>>>>          * kernel heap space before we ever run out of available physical
>>>>>  ***************
>>>>>  *** 2492,2502 ****
>>>>>             return (1);
>>>>>     #endif
>>>>>     #else    /* !sun */
>>>>>         if (kmem_used() > (kmem_size() * 3) / 4)
>>>>>             return (1);
>>>>>     #endif    /* sun */
>>>>>
>>>>>  - #else
>>>>>         if (spa_get_random(100) == 0)
>>>>>             return (1);
>>>>>     #endif
>>>>>  --- 2596,2658 ----
>>>>>             return (1);
>>>>>     #endif
>>>>>     #else    /* !sun */
>>>>>  +
>>>>>  + #ifdef    NEWRECLAIM
>>>>>  + #ifdef    __FreeBSD__
>>>>>  + /*
>>>>>  +  * Implement the new tunable free RAM algorithm.  We check the free pages
>>>>>  +  * against the minimum specified target and the percentage that should be
>>>>>  +  * free.  If we're low we ask for ARC cache shrinkage. If this is defined
>>>>>  +  * on a FreeBSD system the older checks are not performed.
>>>>>  +  *
>>>>>  +  * Check first to see if we need to init freepages, then test.
>>>>>  +  */
>>>>>  +     if (!freepages) {        /* If zero then (re)init */
>>>>>  +         freepages = cnt.v_free_target - (cnt.v_free_target / 33);
>>>>>  + #ifdef    NEWRECLAIM_DEBUG
>>>>>  +         printf("ZFS ARC: Default vfs.zfs.arc_freepages to [%u] [%u less 3%%]\n", freepages, cnt.v_free_target);
>>>>>  + #endif    /* NEWRECLAIM_DEBUG */
>>>>>  +     }
>>>>>  + #ifdef    NEWRECLAIM_DEBUG
>>>>>  +     if (percent_target != oldpercent) {
>>>>>  +         printf("ZFS ARC: Reservation percent change to [%d], [%d] pages, [%d] free\n", percent_target, cnt.v_page_count, cnt.v_free_count);
>>>>>  +         oldpercent = percent_target;
>>>>>  +     }
>>>>>  +     if (freepages != oldfreepages) {
>>>>>  +         printf("ZFS ARC: Low RAM page change to [%d], [%d] pages, [%d] free\n", freepages, cnt.v_page_count, cnt.v_free_count);
>>>>>  +         oldfreepages = freepages;
>>>>>  +     }
>>>>>  + #endif    /* NEWRECLAIM_DEBUG */
>>>>>  + /*
>>>>>  +  * Now figure out how much free RAM we require to call the ARC cache status
>>>>>  +  * "ok".  Add the percentage specified of the total to the base requirement.
>>>>>  +  */
>>>>>  +
>>>>>  +     if (cnt.v_free_count < freepages + ((cnt.v_page_count / 100) * percent_target)) {
>>>>>  + #ifdef    NEWRECLAIM_DEBUG
>>>>>  +         if (xval != 1) {
>>>>>  +             printf("ZFS ARC: RECLAIM total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", cnt.v_page_count, cnt.v_free_count, ((cnt.v_free_count * 100) / cnt.v_page_count), freepages, percent_target);
>>>>>  +             xval = 1;
>>>>>  +         }
>>>>>  + #endif    /* NEWRECLAIM_DEBUG */
>>>>>  +         return(1);
>>>>>  +     } else {
>>>>>  + #ifdef    NEWRECLAIM_DEBUG
>>>>>  +         if (xval != 0) {
>>>>>  +             printf("ZFS ARC: NORMAL total %u, free %u, free pct (%u), reserved (%u), target pct (%u)\n", cnt.v_page_count, cnt.v_free_count, ((cnt.v_free_count * 100) / cnt.v_page_count), freepages, percent_target);
>>>>>  +             xval = 0;
>>>>>  +         }
>>>>>  + #endif    /* NEWRECLAIM_DEBUG */
>>>>>  +         return(0);
>>>>>  +     }
>>>>>  +
>>>>>  + #endif    /* __FreeBSD__ */
>>>>>  + #endif    /* NEWRECLAIM */
>>>>>  +
>>>>>         if (kmem_used() > (kmem_size() * 3) / 4)
>>>>>             return (1);
>>>>>     #endif    /* sun */
>>>>>
>>>>>         if (spa_get_random(100) == 0)
>>>>>             return (1);
>>>>>     #endif
>>>>>
>>>>>  -- 
>>>>>  -- Karl
>>>>>  karl at denninger.net [5]
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> freebsd-fs at freebsd.org [6] mailing list
>>>>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs [7]
>>>>> To unsubscribe, send any mail to
>>>>> "freebsd-fs-unsubscribe at freebsd.org" [8]
>>>
>>> -- Karl Denninger
>>>  karl at denninger.net [9]
>>>  _The Market Ticker_
>>>
>>>
>>>
>>> Links:
>>> ------
>>> [1] mailto:karl at denninger.net
>>> [2] mailto:bug-followup at FreeBSD.org
>>> [3] mailto:karl at fs.denninger.net
>>> [4] mailto:karl at denninger.net
>>> [5] mailto:karl at denninger.net
>>> [6] mailto:freebsd-fs at freebsd.org
>>> [7] http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>> [8] mailto:freebsd-fs-unsubscribe at freebsd.org
>>> [9] mailto:karl at denninger.net
>>
>>
>>
>>



More information about the freebsd-fs mailing list