first, however, you need to build the locate database before you can use it.
run the script /usr/libexec/locate.updatedb
now do locate .html
ok, that was too many files; do locate .html |more
ok locate all the instances of .xinitrc
how many html files are there?
try:
locate .html |wc -l
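note: locate searches a prebuilt database, not the live filesystem, so it can miss files created since the last updatedb run. here is a sketch of the same kind of counting done with find, which always sees the current state of the disk (the /tmp/htmldemo directory is made up for this example):

```shell
# Sketch: count .html files with find instead of locate.
# /tmp/htmldemo is a throwaway directory invented for this example.
mkdir -p /tmp/htmldemo
touch /tmp/htmldemo/a.html /tmp/htmldemo/b.html /tmp/htmldemo/notes.txt

# find walks the live filesystem, so it also sees files created
# after the locate database was last rebuilt
find /tmp/htmldemo -name '*.html' | wc -l
```

the trade-off: locate is much faster on a big filesystem, find is always up to date.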
locate your netscape binary if netscape is on your machine...
is netscape in your path?
which netscape
if not...
let's make a symlink
ln -s /usr/local/netscape/netscape /usr/local/bin/netscape
rehash
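here is a self-contained sketch of what the symlink does, using throwaway paths under /tmp (these directories and the fake netscape script are invented for the example; on your machine use the real paths from the ln -s line above):

```shell
# Sketch: put a program on your PATH via a symlink.
rm -rf /tmp/pathdemo
mkdir -p /tmp/pathdemo/app /tmp/pathdemo/bin

# a stand-in for the real netscape binary
printf '#!/bin/sh\necho running\n' > /tmp/pathdemo/app/netscape
chmod +x /tmp/pathdemo/app/netscape

# link it into a directory that is (or will be) on PATH
ln -s /tmp/pathdemo/app/netscape /tmp/pathdemo/bin/netscape

# with that directory on PATH, the shell can now find it
PATH="/tmp/pathdemo/bin:$PATH" command -v netscape
```

one detail: rehash is a csh/tcsh builtin that rebuilds the shell's command hash table; in bash the equivalent is hash -r, and any newly started shell picks the change up automatically.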
cd /u
mkdir log-exercise
ftp t1-noc.t1.ws.afnog.org
cd pub
ls -la
get the file: access.log.9.gz
exit ftp
ls -la
how big is the file?
now, uncompress the file.
gunzip access.log.9.gz
do an ls -l access.log.9
that file is pretty big (150MB)
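the compress/uncompress cycle you just did can be sketched on a tiny file to see that it is lossless (the file names here are made up for the example):

```shell
# Sketch: gzip -9 trades extra CPU time for a smaller file,
# and gunzip restores the original byte for byte.
printf 'line one\nline two\n' > /tmp/gzipdemo.log
cp /tmp/gzipdemo.log /tmp/gzipdemo.orig

gzip -9 /tmp/gzipdemo.log     # produces /tmp/gzipdemo.log.gz, removes the original
gunzip /tmp/gzipdemo.log.gz   # restores /tmp/gzipdemo.log, removes the .gz

# verify the round trip changed nothing
cmp /tmp/gzipdemo.log /tmp/gzipdemo.orig && echo "round trip ok"
```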
let's recompress it
gzip -9 access.log.9
hm... that's going to take a while; let's background it.
control-z
at the new prompt: bg
the process is now running in the background
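starting a command with & at the end puts it in the background directly, which is equivalent to the ctrl-z / bg dance above. a quick sketch (sleep stands in for the long gzip run):

```shell
# Sketch: background a job with &, then wait for it.
sleep 2 &      # start the job already in the background
jobs           # list this shell's background jobs
wait $!        # $! holds the PID of the most recent background job
echo "background job finished"
```

ctrl-z / bg is what you use when you only realize the job is slow after you have already started it in the foreground.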
do a ps
... hmm... a few processes.
let's find just that process: ps|grep gzip
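one gotcha: the grep process itself has "gzip" in its command line, so it often shows up in its own output. a common trick is a bracket pattern, sketched here with sleep as the target since your gzip may have finished already:

```shell
# Sketch: the grep-matches-itself gotcha, using sleep as the target.
sleep 30 &
target=$!

# naive: grep's own command line contains "sleep", so grep can match itself
ps ax | grep sleep      # "ax" shows all processes, even without a terminal

# trick: the pattern [s]leep matches "sleep" in the ps output, but the
# literal text "[s]leep" in grep's own command line does not match it
ps ax | grep '[s]leep'

kill "$target"          # clean up the demo process
```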
ok, we could go do something else, but it should be done by now...
let's just manipulate the log file while it's still compressed.
take a look at what the contents of the file look like, zcat access.log.9.gz|more
hm... looks like a squid log file
what useful things can we learn from this file?
in a squid log file, each request gets its own line, so if we count the lines in the file we can figure out how many requests there were that day.
let's try, zcat access.log.9.gz|wc -l
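the whole pipeline can be sketched end-to-end on a tiny fabricated log; the three lines below are made-up stand-ins for real squid entries, just to show the one-request-per-line counting:

```shell
# Sketch: one request per line, so wc -l gives the number of requests.
# These log lines are fabricated stand-ins for real squid entries.
printf '%s\n' \
  '934875600.001 120 10.0.0.1 TCP_MISS/200 4512 GET http://example.com/a' \
  '934875601.002  15 10.0.0.2 TCP_HIT/200 1024 GET http://example.com/b' \
  '934875602.003  90 10.0.0.1 TCP_MISS/404 512 GET http://example.com/c' \
  > /tmp/access.demo
gzip -f /tmp/access.demo       # keep it compressed, like the exercise

zcat /tmp/access.demo.gz | wc -l    # -> 3 requests
```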
ok that's a big number
now let's get the raw hits....
try, zcat access.log.9.gz|grep HIT|wc -l
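note that grep HIT matches any line whose squid result code contains the string HIT (TCP_HIT, TCP_MEM_HIT, and so on), which is exactly what you want for a raw hit count. a self-contained sketch, again with fabricated log lines:

```shell
# Sketch: count cache hits vs total requests in a fabricated log.
printf '%s\n' \
  '934875600.001 120 10.0.0.1 TCP_MISS/200 4512 GET http://example.com/a' \
  '934875601.002  15 10.0.0.2 TCP_HIT/200 1024 GET http://example.com/b' \
  '934875602.003  90 10.0.0.1 TCP_MISS/404 512 GET http://example.com/c' \
  '934875603.004   9 10.0.0.3 TCP_MEM_HIT/200 256 GET http://example.com/d' \
  > /tmp/hits.demo
gzip -f /tmp/hits.demo

# grep -c counts matching lines, same as grep HIT | wc -l
hits=$(zcat /tmp/hits.demo.gz | grep -c HIT)
total=$(zcat /tmp/hits.demo.gz | wc -l)
echo "$hits hits out of $total requests"
```

comparing the hit count to the total line count gives you the cache hit ratio for the day.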