fetchListFTP: implemented, but crippled
Tim Kientzle
kientzle at acm.org
Fri May 23 21:48:46 PDT 2003
I have a rough implementation of
fetchListFTP that I'm using. I've attached
a diff against a recent -STABLE.
I haven't yet tested it against -CURRENT.
Unfortunately, there's an ugly problem here.
The fetch(3) manpage specifies that this function returns
a malloc-ed array of "struct url_ent".
This was not a wise decision: each entry
carries a fixed-size name buffer, so
ftp.freebsd.org/.../packages-4.7-release/All
requires almost 8 megabytes of RAM in this form,
rather than the <500kB it should require.
Easiest fix: move the 'name' field to the end
(which permits variably sizing it) and redefine
the API to return a linked list.
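That first option might look something like this (a sketch only;
'url_lent' and its helper are names I've made up for illustration,
not anything in fetch.h):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical replacement entry: name moved to the end so each
 * node can be allocated just large enough, nodes chained in a list. */
struct url_stat { long long size; long mtime, atime; };

struct url_lent {
	struct url_lent *next;
	struct url_stat  stat;
	char		 name[1];	/* variably sized at malloc time */
};

static struct url_lent *
url_lent_new(const char *name, struct url_lent *next)
{
	/* sizeof(*e) already counts name[1], hence room for the NUL */
	struct url_lent *e = malloc(sizeof(*e) + strlen(name));

	if (e == NULL)
		return NULL;
	e->next = next;
	e->stat.size = -1;
	e->stat.mtime = e->stat.atime = 0;
	strcpy(e->name, name);
	return e;
}
```

An exact-sized array could still be built by walking the list once
to count, if the array interface has to stay.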
Better fix: redesign API to return one
entry at a time. Then client code can manage
storage as appropriate. With this approach,
struct url_ent could remain unchanged.
The current API makes fetchListFTP pretty much
unusable for me; I'm not comfortable with that
kind of memory usage. If there's any consensus
about which fix is appropriate, I can start
putting it together.
Tim
-------------- next part --------------
Index: ftp.c
===================================================================
RCS file: /usr/src/cvs/src/lib/libfetch/ftp.c,v
retrieving revision 1.16.2.27
diff -u -r1.16.2.27 ftp.c
--- ftp.c 25 Jul 2002 15:22:37 -0000 1.16.2.27
+++ ftp.c 24 May 2003 04:26:42 -0000
@@ -997,11 +997,93 @@
}
/*
+ * Note: Different ftp servers return file listings in different
+ * formats. There may need to be many versions of this function,
+ * depending on the server.
+ */
+static int
+_ftp_default_parse_entry(struct url_ent *url_ent, char *buff)
+{
+ char *p;
+
+ /* Someday, try to parse this information from the provided listing */
+ url_ent->stat.size = -1;
+ url_ent->stat.mtime = 0;
+ url_ent->stat.atime = 0;
+
+ /* Use last whitespace-separated token as filename */
+ p = buff + strlen(buff) - 1; /* Start at end and work backwards */
+	while(p > buff && isspace((unsigned char)*p)) { *p = 0; p--; } /* Erase trailing w/s */
+	while(p > buff && !isspace((unsigned char)*p)) p--;
+	if(isspace((unsigned char)*p)) p++;
+ strlcpy(url_ent->name, p, sizeof(url_ent->name));
+
+ return 0;
+}
+
+
+/*
* List a directory
*/
struct url_ent *
-fetchListFTP(struct url *url __unused, const char *flags __unused)
+fetchListFTP(struct url *url, const char *flags)
{
- warnx("fetchListFTP(): not implemented");
- return NULL;
+ FILE *f;
+ char buff[MAXPATHLEN];
+ struct url_ent *url_list, *new_url_list;
+ int url_list_length, new_url_list_length;
+ int (*_parser)(struct url_ent *, char *);
+ int url_index;
+
+ f = _ftp_request(url,"LIST",NULL,_ftp_get_proxy(),flags);
+ if(!f) {
+ fprintf(stderr,"Error: %s\n",strerror(errno));
+ return NULL;
+ }
+ /* Different FTP servers return LIST info in different formats */
+ /* Set _parser to point to an appropriate routine for this server. */
+ _parser = _ftp_default_parse_entry;
+
+ url_list_length = 4;
+ url_list = malloc(sizeof(struct url_ent) * url_list_length);
+ memset(url_list,0,sizeof(struct url_ent) * url_list_length);
+ url_index = 0;
+
+ /* Could use an alternate strategy here: build linked list of url_ent,
+ * then size an array exactly and copy everything over into
+ * the array. The current code just resizes the array bigger
+ * as needed. */
+
+ /* Another option: change the API to return one directory entry
+ * at a time a la readdir(3)... */
+
+ while(fgets(buff,sizeof(buff),f)) {
+ /* If we need a longer array, lengthen it */
+ if(url_index >= url_list_length) {
+ new_url_list_length = url_list_length * 2;
+ new_url_list = malloc(sizeof(struct url_ent)
+ * new_url_list_length);
+		fprintf(stderr,"fetchListFTP: malloc(%lu)\n",
+		    (unsigned long)(sizeof(struct url_ent)
+		    * new_url_list_length));
+ memcpy(new_url_list, url_list,
+ sizeof(struct url_ent)*url_list_length);
+ memset(new_url_list+url_list_length,0,
+ sizeof(struct url_ent)
+ *(new_url_list_length - url_list_length));
+ free(url_list);
+ url_list = new_url_list;
+ url_list_length = new_url_list_length;
+ }
+ /* Skip to next list element if line parses ok */
+ /* _parser returns !=0 for extraneous lines such as totals */
+ if(_parser(&(url_list[url_index]),buff) == 0) {
+ url_index++;
+ }
+ }
+ /* TODO: determine whether an error occurred. If so, what do I do? */
+ fclose(f);
+
+ /* TODO: Resize url_list down to minimal size */
+ return url_list;
}