
In our previously published guides, we covered installing Hadoop and installing Pig on your own server. We also covered the free IBM Analytics Demo Cloud, which comes with the needed software pre-installed and ready to use. Here is a basic example, with commands, showing how to process a web server log with Hadoop and Pig on the free IBM Analytics Demo Cloud. This guide can be followed even by someone who has never used Hadoop or Pig. We have put together a script to make the work easy. We cannot supply our own server log, because the IP addresses collected by our server are sensitive data whose sharing may be prohibited in EU countries to protect the privacy of our readers. You will need to obtain an Apache web server access log from somewhere to really follow this guide.

How To Process a Server Log in Hadoop and Pig

First, SSH to the IBM Analytics Demo Cloud. Then wget your access log file to your desired location; for us that is ~, aka $HOME, aka where you land after logging in via SSH. In our example, the access log file's name is access.log.1.

If the log sits on your own server, use CLI FTP commands to upload it to the IBM Analytics Demo Cloud instance (the commands are under the sub-header "Transfer FTP Files Directly From One Server to Another : Major Methods With Command Line" in our earlier guide).


First, we feed the data into Hadoop with this command:

hadoop fs -put access.log.1

If you face an error, append the destination path, where diskX is the disk number (like disk2) and USERNAME is the username (like admin1234) that IBM Analytics Demo Cloud provided you for SSH:

hadoop fs -put access.log.1 /diskX/home/USERNAME/

If you run this command:

hadoop fs -ls -R

You'll get output like this, with your file listed:

drwx------+   - USERNAME USERNAME        0 2017-07-20 17:04 .staging
-rw-rw----+   3 USERNAME USERNAME 12183708 2017-07-20 16:33 access.log.1

Now we will create a script named script.pig at the same location with this content:

DEFINE ApacheCommonLogLoader org.apache.pig.piggybank.storage.apachelog.CommonLogLoader();
logs = LOAD '/path/to/USERNAMER/access.log.1' USING ApacheCommonLogLoader
    AS (addr: chararray, logname: chararray, user: chararray, time: chararray,
        method: chararray, uri: chararray, proto: chararray, status: int, bytes: int);
addrs = GROUP logs BY addr;
counts = FOREACH addrs GENERATE FLATTEN($0), COUNT($1) AS count;
DUMP counts;

Change /path/to/USERNAMER/access.log.1 in the above example to the real path and save the file. Now run this command:
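As a quick sanity check outside Hadoop, the same per-IP count can be approximated with standard Unix tools. This is only a sketch: it assumes the first whitespace-separated field of each Common Log Format line is the client IP, and that the file is named access.log.1 as in our example.

```shell
# Print the first field of each line (the client IP in Common Log Format),
# then count how many times each IP occurs, busiest IPs first.
awk '{ print $1 }' access.log.1 | sort | uniq -c | sort -rn | head
```

The counts should match what the Pig script produces, which makes this a handy way to verify the output later.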

locate piggybank.jar

You will get a list of paths. At the bottom of the list, there will be entries like:

...
/usr/ibmpacks/bigsheets/5.4/libext/piggybank.jar
/usr/iop/4.1.0.0/hive/lib/piggybank.jar
/usr/iop/4.1.0.0/pig/piggybank.jar
/usr/iop/4.1.0.0/pig/lib/piggybank.jar

We will use the /usr/iop/4.1.0.0/pig/piggybank.jar path. Run:

pig

You will get a MySQL-like interactive shell (Grunt):

grunt>

Run this command there:

REGISTER '/usr/iop/4.1.0.0/pig/piggybank.jar';

To exit the interface, run:

quit

Now run the script in local mode. Note that a REGISTER issued in an interactive Grunt session does not carry over to a separate batch run, so if Pig complains that CommonLogLoader cannot be resolved, add the same REGISTER line to the top of script.pig:

pig -x local script.pig

With the script, you'll get a list of IP addresses as output, like this:

...
(97.88.119.105,1)
(103.73.224.177,2)
(112.134.38.209,12)
(112.215.240.56,1)
(112.76.230.153,1)
(115.64.142.113,1)
(117.202.230.80,1)
(118.163.69.237,1)
(118.71.254.244,1)
(120.188.87.203,2)
...

Notice the syntax above: each line is an IP address followed by the number of requests it made, separated by a comma. That is the basic demo we intended to show you.
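If you save the DUMP output to a file (say counts.txt, a hypothetical name; Pig can also STORE results directly), the heaviest clients can be pulled out of those (IP,count) tuples with standard tools:

```shell
# Strip the parentheses from Pig's (ip,count) tuples and sort
# numerically on the count field, highest first.
tr -d '()' < counts.txt | sort -t, -k2,2 -rn | head
```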


About Abhishek Ghosh Abhishek Ghosh is a Businessman, Orthopaedic Surgeon, Author and Blogger. You can keep in touch with him on Twitter - @AbhishekCTRL.