Is there a database built into the base system

Polytropon freebsd at edvax.de
Sat Apr 8 19:03:51 UTC 2017


On Sat, 08 Apr 2017 13:35:09 -0400, Ernie Luzar wrote:
> Polytropon wrote:
> > On Sat, 08 Apr 2017 13:00:15 -0400, Ernie Luzar wrote:
> >> Here is my first try at using awk to read every record in the input 
> >> file and drop duplicate records from the output file.
> >>
> >>
> >> This what the data looks like.
> >> /etc >cat /ip.org.sorted
> >> 1.121.136.228;
> >> 1.186.172.200;
> >> 1.186.172.210;
> >> 1.186.172.218;
> >> 1.186.172.218;
> >> 1.186.172.218;
> >> 1.34.169.204;
> >> 101.109.155.81;
> >> 101.109.155.81;
> >> 101.109.155.81;
> >> 101.109.155.81;
> >> 104.121.89.129;
> > 
> > Why not simply use "sort | uniq" to eliminate duplicates?
> > 
> > 
> > 
> >> /etc >cat /root/bin/ipf.table.awk.dup
> >> #! /bin/sh
> >>
> >>    file_in="/ip.org.sorted"
> >>    file_out="/ip.no-dups"
> >>
> >>    awk '{ in_ip = $1 }'
> >>        END { (if in_ip = prev_ip)
> >>                 next
> >>               else
> >>                 prev_ip > $file_out
> >>                 prev_ip = in_ip
> >>            } $file_in
> >>
> >> When I run this script it just hangs there. I have to ctrl/c to break 
> >> out of it. What is wrong with my awk command?
> > 
> > For each line, you store the 1st field (in this case, the entire
> > line) in in_ip, and you overwrite (!) that variable with each new
> > line. At the end of the file (!!!) you make a comparison and even
> > request the next data line. Additionally, keep an eye on the quotes
> > you use: '...' keeps the $ in $file_out literal, so inside awk it
> > is a variable which is empty. Also, the '...' closes right before
> > END, so END sits outside of awk. Remember that awk reads from
> > standard input, so your redirection for the input file would need
> > to be "< $file_in", or a useless use of cat,
> > "cat $file_in | awk > $file_out".
> > 
> > In your specific case, I'd say that awk is the wrong tool.
> > If you simply want to eliminate duplicates, use the classic
> > UNIX approach "sort | uniq". Both tools are part of the OS.
> > 
> 
> The awk script I posted is a learning tool. I know about "sort | uniq"
> 
> I though "end" was end of line not end of file. So how should that awk 
> command look to drop dups from the out put file?

In that case, I'd suggest dropping the sh wrapper and putting
everything into the awk file, for learning purposes. :-)

Here is a suggestion with comments.




#!/usr/bin/awk -f

BEGIN {
	# input file name
	ARGV[1] = "/ip.org.sorted"
	ARGC = 2
	# output file name
	file_out = "/ip.no-dups"
	# reset output file
	printf("") > file_out
	# temporary ip
	temp = ""
}

# process all lines which are not empty
(length > 0) {
	# remove the ; from the line
	gsub(";", "", $0)
	# new ip?
	if (temp != $1) {
		printf("%s\n", $1) >> file_out
		temp = $1
	}
	# ip already known, do not output anything, "empty else branch"
}



As you can see, you don't need an END block when you're not
going to do anything at the end of processing, i.e., after
EOF on input.
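For contrast, here is a case where an END block does make sense:
reporting how many duplicates were skipped, which is only known
after all input has been read. This is just a sketch with inline
sample data, not part of the original script:

```shell
# Count skipped duplicates and report the total after EOF.
printf '1.2.3.4;\n1.2.3.4;\n5.6.7.8;\n' | awk '
{
	if ($1 == prev) {
		dups++		# same as the previous line: count it
		next
	}
	prev = $1
}
END {
	printf("skipped %d duplicate(s)\n", dups)
}'
# prints: skipped 1 duplicate(s)
```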

You can match lines against several patterns. Example:

{ ... }		-> process all lines
(cond) { ... }	-> process line if condition "cond" is true
/regex/ { ... }	-> process line if regular expression "regex" matches

You can of course combine patterns, for example:

/^[^#]/ && (length > 100) { ... }

That would process all lines not starting with a # which are
longer than 100 characters.
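To see such a combined pattern in action, here is a tiny
self-contained run; the 10-character limit and the sample lines
are made up for the demo:

```shell
# Print only non-comment lines longer than 10 characters.
printf '# a comment\nshort\nthis line is long enough\n' | awk '
/^[^#]/ && (length > 10) {
	print
}'
# prints: this line is long enough
```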



In my example, I wanted to show how it is possible to use
awk "instead of" sh. But keep in mind that using awk as a
filter is much better. For illustration:


#!/bin/sh

file_in="/ip.org.sorted"
file_out="/ip.no-dups"

cat ${file_in} | awk '

BEGIN {
	# temporary ip
	temp = ""
}

# process all lines which are not empty
(length > 0) {
	# remove the ; from the line
	gsub(";", "", $0)
	# new ip?
	if (temp != $1) {
		printf("%s\n", $1)
		temp = $1
	}
	# ip already known, do not output anything, "empty else branch"
}
' > ${file_out}



As you can see now, the awk code is inside the sh wrapper.
If it was in a separate file, you'd probably do something
like this:



#!/bin/sh

file_in="/ip.org.sorted"
file_out="/ip.no-dups"

cat ${file_in} | awk -f remove_dup_ip.awk > ${file_out}



This again is a nice illustration of "useless use of cat". ;-)
The form awk -f <program> < <input> > <output> is probably
more efficient, but it "breaks" the idea of the pipeline
where you can add or remove steps, supply testing data
instead of the real data, or create temporary results for
later comparison.
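For example, to test the filter with made-up data instead of the
real input file, you only have to swap the first step of the
pipeline:

```shell
# Same filter as above, fed with inline test data instead of cat ${file_in}.
printf '1.2.3.4;\n1.2.3.4;\n5.6.7.8;\n' | awk '
(length > 0) {
	gsub(";", "", $0)
	if (temp != $1) {
		print $1
		temp = $1
	}
}'
# prints: 1.2.3.4
#         5.6.7.8
```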







-- 
Polytropon
Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...

