gsched: modernize or remove?

Poul-Henning Kamp phk at phk.freebsd.dk
Fri Dec 27 22:00:44 UTC 2019


--------
In message <CANCZdfovCe1UQmC+94QK5P-imLEBygoyQLDqNSdDPeYryoL=bA at mail.gmail.com>
, Warner Losh writes:
>On Fri, Dec 27, 2019 at 10:53 AM Alexander Motin <mav at freebsd.org> wrote:
>
>> Hi,
>>
>> As far as I can see, the gsched code has not really been maintained
>> for the last 10 years since being added.  It lacks support for
>> later-added features such as direct dispatch, unmapped I/O,
>> stripesize/stripeoffset, resize, etc.
>> Even if some of them may require just a proper declaration, it tells
>> me that barely anybody has used it seriously for years.  But my
>> primary concern is the `gsched insert` implementation.  I ran into it
>> now because it is the last consumer of the nstart/nend counters in
>> GEOM, which I would like to remove for performance reasons.  But I
>> also see tons of potential problems with the idea of moving providers
>> between unaware geoms.
>>
>> So my question is: does it make sense to try to fix/modernize it, or
>> would it just be easier to remove it?  Does anybody still use it, or
>> see some future for it?

Gsched was always a weird thing IMO.

I was happy to see that you could do stuff like that with GEOM, but
for the life of me I could never figure out why you would want to
do it in GEOM, which is a very low-information environment when it
comes to scheduling decisions.

I believe the original inspiration was "anticipatory disk scheduling",
which tries to mitigate some of the starvation issues you can get
with a normal elevator disksort on systems with very few ioreq
sources[1].
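
For illustration, the gist of the anticipation decision can be
sketched in a few lines of userspace C: after a request from one
source completes, the scheduler may briefly keep the disk idle rather
than seek away to a distant request from somebody else, betting that
the same source will come back with a nearby request.  All names and
thresholds below are invented for illustration; this is not any
kernel or gsched API:

/*
 * Sketch of the anticipation decision; names and thresholds invented.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define	ANTIC_WINDOW_US	3000	/* longest idle wait we will accept */
#define	NEARBY_BLOCKS	2048	/* "close enough" that seeking is cheap */

struct io_source {
	long	mean_think_us;	/* observed gap between this source's requests */
};

/*
 * Decide whether to keep the disk idle after completing a request at
 * "headpos" from source "cur", when the best queued alternative from
 * any other source is at "best_blkno".
 */
static bool
should_anticipate(const struct io_source *cur, long headpos, long best_blkno)
{

	/* If the alternative is already nearby, just dispatch it. */
	if (labs(best_blkno - headpos) < NEARBY_BLOCKS)
		return (false);
	/* Only wait for sources that historically come back quickly. */
	return (cur->mean_think_us < ANTIC_WINDOW_US);
}

int
main(void)
{
	struct io_source fast = { .mean_think_us = 500 };
	struct io_source slow = { .mean_think_us = 50000 };

	printf("fast source, far alternative: %s\n",
	    should_anticipate(&fast, 1000, 900000) ? "wait" : "dispatch");
	printf("slow source, far alternative: %s\n",
	    should_anticipate(&slow, 1000, 900000) ? "wait" : "dispatch");
	printf("fast source, near alternative: %s\n",
	    should_anticipate(&fast, 1000, 1500) ? "wait" : "dispatch");
	return (0);
}

Whether such a wait pays off depends entirely on the source actually
coming back quickly, which is why a real scheduler has to keep
per-source statistics like the mean_think_us guess above.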

With SSDs having all but erased seek time from the surface of the
planet, and with huge caches in drives, controllers and pretty much
everywhere else people have been able to squeeze one in, it is not
even obvious to me whether it makes sense to have any disksort in
the first place[2], much less gsched.

Poul-Henning

[1] Imagine one process doing lots of work on the inner tracks and
another on the outer tracks: if either process is fast enough, it
can starve out the other one, because its work is always closer.
Traditionally disksorts have had "no changing direction until you
get to the extreme request" hacks to ensure some fairness, but that
can get you into worst-case-seek-time-per-I/O-request land.
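
For the curious, here is a minimal userspace sketch of such a one-way
elevator, assuming requests are keyed by block offset only; this is
not the kernel's bioq code, just an illustration of the "sweep to the
extreme request before reversing" behaviour:

/*
 * One-way elevator ("disksort") queue over block offsets.  Requests at
 * or past the head join the current sweep; anything behind the head is
 * parked and only serviced once the sweep reaches the extreme request.
 */
#include <stdio.h>
#include <stdlib.h>

struct req {
	long		blkno;		/* starting block of the request */
	struct req	*next;
};

struct elevator {
	struct req	*ahead;		/* sorted, at or past the head */
	struct req	*behind;	/* sorted, waiting for the next sweep */
	long		headpos;	/* block of the last dispatched request */
};

/* Insert r into the list *lp, keeping it sorted by block number. */
static void
sorted_insert(struct req **lp, struct req *r)
{

	while (*lp != NULL && (*lp)->blkno < r->blkno)
		lp = &(*lp)->next;
	r->next = *lp;
	*lp = r;
}

/* Queue a request on the current sweep, or park it for the next one. */
static void
elv_enqueue(struct elevator *e, struct req *r)
{

	if (r->blkno >= e->headpos)
		sorted_insert(&e->ahead, r);
	else
		sorted_insert(&e->behind, r);
}

/* Dispatch the next request; only reverse once the sweep is drained. */
static struct req *
elv_dequeue(struct elevator *e)
{
	struct req *r;

	if (e->ahead == NULL) {		/* reached the extreme request */
		e->ahead = e->behind;
		e->behind = NULL;
	}
	if ((r = e->ahead) == NULL)
		return (NULL);
	e->ahead = r->next;
	e->headpos = r->blkno;
	return (r);
}

static void
enqueue_block(struct elevator *e, long blkno)
{
	struct req *r;

	r = malloc(sizeof(*r));
	r->blkno = blkno;
	elv_enqueue(e, r);
}

int
main(void)
{
	struct elevator e = { NULL, NULL, 0 };
	struct req *r;

	enqueue_block(&e, 400);
	enqueue_block(&e, 700);
	enqueue_block(&e, 900);

	r = elv_dequeue(&e);		/* head moves to block 400 */
	printf("dispatch block %ld\n", r->blkno);
	free(r);

	enqueue_block(&e, 20);		/* behind the head: parked for next sweep */
	enqueue_block(&e, 450);		/* ahead of the head: current sweep */

	while ((r = elv_dequeue(&e)) != NULL) {
		printf("dispatch block %ld\n", r->blkno);
		free(r);
	}
	return (0);
}

In this example block 20 arrives after the head has already passed it,
so it is parked for the next sweep instead of dragging the head back
and starving the request at block 900; the price is that it may have
to wait out a full sweep itself.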

[2] Not sure if anybody has looked at this yet; if not, it is a good
project to get your feet wet with disk I/O and benchmarking.
NB: Beware of clustering.


-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.

