freebsd-questions Digest, Vol 876, Issue 9

RW rwmaillists at
Mon Mar 29 14:10:02 UTC 2021

On Mon, 29 Mar 2021 09:52:25 +0800
wa5qjh wrote:

> and it will even if cached, and I've watched it download it all
> again,

Even if that's true, if you have a poor line there is no advantage in
downloading the largest files first. If the connection fails after 30
minutes, you have wasted 30 minutes of downloading regardless of order.
You get the minimum waste by downloading the slowest transfers first,
and once you account for latency and TCP ramp-up, the slowest transfers
are on average the smaller files.
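That argument can be sketched with toy numbers (all sizes and rates
below are hypothetical): model a link that drops after a fixed number of
minutes, and count the megabytes transferred for the file that was in
progress when it dropped, since that partial download is thrown away.

```shell
#!/bin/sh
# Toy model of download-order waste on a flaky connection.
# Input lines: size_mb rate_mb_per_min (both made-up numbers).
# Waste = MB transferred for the file in progress when the link drops.
waste() {  # $1 = minutes until the line drops; file list on stdin
    awk -v fail="$1" '
        { dur = $1 / $2
          if (t + dur <= fail) { t += dur }   # file completed in time
          else { print (fail - t) * $2; done = 1; exit }  # partial, wasted
        }
        END { if (!done) print 0 }'
}

# Shortest transfers first vs largest first, link dying at 10 minutes:
printf '1 2\n5 8\n50 10\n200 10\n' | waste 10   # shortest first
printf '200 10\n50 10\n5 8\n1 2\n' | waste 10   # largest first
```

With these numbers, shortest-first wastes 38.75 MB at the failure, while
largest-first wastes the full 100 MB transferred, because nothing had a
chance to finish.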

> means you have to spend all that time waiting to see if it's
> going to finish or not. Believe me, it's a big time and (precious)
> bandwidth waster!

I run it in a loop.

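For reference, "run it in a loop" can be as simple as a retry wrapper
(a sketch; the 5-second pause and the pkg command in the example are
placeholders, adjust to whatever you actually run):

```shell
#!/bin/sh
# retry CMD...: rerun CMD until it exits successfully.
retry() {
    until "$@"; do
        echo "retry: \"$*\" failed; sleeping 5s" >&2
        sleep 5
    done
}

# Example: keep retrying an upgrade over a flaky connection.
# retry pkg upgrade -y
```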
> I've seen it get to 99% downloaded then declared a
> file size mismatch and   

I've seen some packages fail at ~99% a lot, but not recently. I think
it was a server-side problem that has since been fixed.

> fetch, as used, doesn't employ the resume feature, so whatever was
> downloaded last time isn't used the next time, even if only 2 minutes
> prior. Partial downloads are not cached.

I'd like to see this change too. I don't recall the history of this
with pkg, but in ports it used to work and then fell off.
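Until that changes, the base-system fetch(1) can resume interrupted
transfers by hand: -a retries automatically on soft failures and -r
restarts a previously interrupted transfer where it left off. A minimal
wrapper (the URL in the comment is a placeholder):

```shell
#!/bin/sh
# Manual resume with base-system fetch(1):
#   -a  automatically retry the transfer upon soft failures
#   -r  restart a previously interrupted transfer where it left off
resume_fetch() {
    fetch -a -r "$1"
}

# Example (placeholder URL, point it at whichever distfile keeps failing):
# resume_fetch https://example.org/some-distfile.tar.gz
```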

> I'm rather curious though, since I'm pretty
> certain there's a good reason for making the big downloads at or near
> last, what is it?

I think it just ends up that way. Actually, when I downloaded distfiles
on dial-up or slow ADSL, I much preferred to leave the largest till
last. Some of the larger ports update frequently, so downloading in
more than one session minimizes waste.
