YAPIB (was: Drawing graphics on terminal)

Paul Robinson paul at iconoplex.co.uk
Fri Jun 20 02:47:21 PDT 2003

On Thu, Jun 19, 2003 at 12:11:26PM -0700, Tim Kientzle wrote:

> That would seem to be the hard part.  I presume you've
> looked at SUSE's YAST, Debian's APT, and other such tools?

*nods* - nice basis, but not... well... you know.
> What I have now works as follows:
>  * Start reading the package, extract the packing list, parse it.
>  * Identify any requirements, recursively install those.
>  * Continue reading/installing this package.

To clarify your meaning of recursiveness in this context (I can think of
another way of doing it "recursively" and I don't have time to peek at the
code right now): if I want to install package A which requires B and C, B
requires D and E, and D requires F, your installer would go Start A -> I
need B -> Start B -> I need D -> Start D -> I need F -> Install F -> Install
D -> I need E -> Install E -> Install B -> I need C -> Install C -> Install A
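Just to pin down the order I mean, a throwaway Python sketch (the package
names and the DEPENDS table are made up for the example, not anything in
the real tool):

```python
# Hypothetical dependency table for the example above:
# A needs B and C, B needs D and E, D needs F.
DEPENDS = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "D": ["F"],
}

def install_order(pkg, done=None):
    """Return the order packages get installed, depth-first:
    dependencies always land before the package that needs them."""
    if done is None:
        done = []
    for dep in DEPENDS.get(pkg, []):
        install_order(dep, done)
    if pkg not in done:
        done.append(pkg)
    return done

print(install_order("A"))  # ['F', 'D', 'E', 'B', 'C', 'A']
```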
> This has a big problem with network installs.  In particular,
> the download gets stalled while the requirements are added.
> Over a slow connection, this could leave you with a stalled
> TCP connection for hours.  Not good.

In the chain above, if F isn't available for some reason, you have A, B and
D all half installed on your machine, waiting for a 32Kb library on an
un-mirrored FTP server in Bulgaria... hmmm...
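One way around that half-installed state would be to resolve and fetch
everything up front, and only start installing once every download has
succeeded. A rough sketch (fetch_all(), and the fetch callback it takes,
are hypothetical stand-ins, not the real code):

```python
# Hypothetical two-phase approach: download every required package
# first; installation only begins after all fetches have succeeded,
# so a missing package leaves nothing half installed.
def fetch_all(pkg, depends, fetch):
    """Fetch pkg and all its dependencies, depth-first.
    Raises before anything would be installed if any fetch fails."""
    fetched = []
    def walk(p):
        for dep in depends.get(p, []):
            walk(dep)
        if p not in fetched:
            fetch(p)          # may raise if the mirror is unreachable
            fetched.append(p)
    walk(pkg)
    return fetched            # safe install order: dependencies first
```

If the Bulgarian mirror is down, the failure happens during the fetch
phase and the machine is left untouched.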
> One way to address this would be to separate "install-time"
> requirements from "run-time" requirements.  The ports framework
> already has some concept of this (separates "build-time" from
> "run-time"), but it doesn't seem quite sufficient.

If you need it at run time, surely it makes sense to grab it at build time?
I'm not sure I can see the benefit of separating the two - you're just going
to create a sense of not knowing whether your application is ready to go or
not, because it's "installed" but doesn't have the kit it needs around it to
make it actually *work*.... hmmmm...
> I'm looking at a couple of approaches.  One is to eliminate -r and instead
> have a simple list of package sources that get inspected.  Debian's package
> management does something similar to this.  For example, you might have
> an /etc/pkg.conf that specifies the following sources:
>    .
>    /usr/packages/distfiles
>    /nfs-server/packages/distfiles
>    ftp://ftp3.freebsd.org/some/path/packages-5.8-release/
>    ftp://ftp.joesfreebsdsite.org/some/path/packages-5.8-release/
>    cdrom:"FreeBSD 5.8 CDROM #2":/cdrom/packages

Yup, that's what I was thinking, but you would have such a file for each
package, thereby meaning packages can live all over the place. In addition,
you wouldn't need that file on the local machine, and for backward
compatibility, the -r switch grabs a file with that info off of the mirrors,
the same way the actual packages are fetched now. It means that in 3-4 years,
when people are no longer trying to do package management with the current
stuff, the mirrors *could* reclaim some disk space. This is likely to be an
issue if we want to try and get as much stuff out there as possible
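As a sketch of how the source list above might be consulted - try each
source in order and take the package from the first one that has it
(locate() and the available() probe are hypothetical, the paths are just
the ones from the example file):

```python
# Hypothetical walk over an /etc/pkg.conf-style source list, in the
# order given; available() stands in for a real directory/FTP probe.
SOURCES = [
    ".",
    "/usr/packages/distfiles",
    "/nfs-server/packages/distfiles",
    "ftp://ftp3.freebsd.org/some/path/packages-5.8-release/",
]

def locate(pkg, sources, available):
    """Return the first source that offers pkg, or None if nobody does."""
    for src in sources:
        if available(src, pkg):
            return src
    return None   # not found anywhere: report it, don't half-install
```

Each repository only has to know what it itself provides, which is the
point Tim makes below about avoiding a master redirect.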

> installed.  In particular, note that this should allow us to support
> the CD-ROMs more efficiently, by locating packages on particular CD-ROMs
> and then prompting the user to insert the correct CD.

There is a minor issue here, around the way I'm planning on helping out the 
OEM/release engineering stuff as part of the installer effort, in that the 
package might not be on "FreeBSD 5.8 CDROM #2" but rather on "Dell OEM 
FreeBSD 6.2 Disk 1", but that's my problem. The more I think about it, the 
less of an issue it becomes, as I've just written some code in my head 
around building the release disks that sorts some of this out, but it's an 
extra requirement.
> Note that this is simpler than having some form of "master redirect"
> file, since each repository only needs to track what it provides,
> not what other repositories might offer. Users can mix and match
> repositories as needed.

I'm thinking about backward compatibility on the command line for -r, which
grabs the "master re-direct" file in the format above.
> No opinion on this one.  Perhaps you could formulate a couple of
> scenarios in which it would be beneficial to be able to mix and
> match these two?

Where the port exists but a pre-built binary isn't available, or where
somebody wants an easy install with their own configure options. So, in your
file above - but where you're explicitly discussing a specific package rather
than packages in general - you could have a line, for example:
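Something along these lines, say (the syntax and the port path here are
purely illustrative):

   ports:/usr/ports/www/apache13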


with command-line switches to force the ports option, pass extra args to
configure, etc. This means as an administrator you have one 'place' to look
after third-party code, you get the advantage of being able to wrap ports
into the /var/db/pkg DB, and if the binaries don't exist you can fall back to
building from scratch. In fact, with a bit more work, you could write a
switch to make a package that can then go onto a mirror, with nothing more
than a command-line switch, based on the port. It also means that if
somebody wants to port their application to FreeBSD and distribute a
pre-built binary rather than distribute source, they can locally follow the
porter's handbook, build their own port, turn it into a package, and then go
out and sell it in stores - i.e. it encourages commercial involvement in
FreeBSD, which is no bad thing. You also suddenly make pkg_add able to handle
not only pre-built binaries, but also to pick up all the effort of the ports
guys as well.
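On the fall-back idea, the logic I have in mind is roughly this (a Python
sketch only - obtain(), have_binary() and build_port() are hypothetical
stand-ins for whatever the real tool would do):

```python
# Illustrative only: prefer a pre-built binary from any configured
# source; fall back to building from the port when no binary exists.
def obtain(pkg, sources, have_binary, build_port):
    """Return ("binary", source) if a pre-built package was found,
    else build from the port and return ("port", None)."""
    for src in sources:
        if have_binary(src, pkg):
            return ("binary", src)
    build_port(pkg)            # no binary anywhere: build it ourselves
    return ("port", None)
```

Either way the result ends up registered in the same /var/db/pkg DB, which
is what gives the administrator the single place to look.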

Anyway, I think we're both talking about the same thing here, except you're 
thinking of a main pkg.conf file, whereas I'm thinking of a DB on disk, or 
retrieved over the network for EACH PACKAGE, with the benefit of bringing 
ports in under the package management tree as a fall-back if the binary 
isn't there, and as an excellent way of helping commercial entities start 
selling apps for FreeBSD.

Just some ideas... 

Paul Robinson

More information about the freebsd-libh mailing list