BigData Lab01 02
Information Management
Table of Contents
1 Introduction
2 About this Lab
3 Environment Setup Requirements
3.1 Getting Started
5 MapReduce
5.1 Running the WordCount program
1 Introduction
The overwhelming trend towards digital services, combined with cheap storage, has generated massive amounts of data that
enterprises need to effectively gather, process, and analyze. Techniques from the data warehousing and high-performance
computing communities are invaluable for many enterprises. However, their cost or the complexity of scaling up often
discourages the accumulation of data without an immediate need. Because valuable knowledge may nevertheless be buried in this
data, related scale-out technologies have been developed. Examples include Google's MapReduce and its open-source
implementation, Apache Hadoop.
Hadoop is an open-source project administered by the Apache Software Foundation. Hadoop’s contributors work for some of the
world’s biggest technology companies. That diverse, motivated community has produced a collaborative platform for
consolidating, combining and understanding data.
Technically, Hadoop consists of two key services: data storage using the Hadoop Distributed File System (HDFS) and large-scale
parallel data processing using a technique called MapReduce.
2 About this Lab
After completing this lab, you should be able to:
· Use Hadoop commands to run a sample MapReduce program on the Hadoop system
3 Environment Setup Requirements
For help on how to obtain these components, please follow the instructions specified in VMware Basics and Introduction from
module 1.
3.1 Getting Started
1. Start the VMware image by clicking the play button in VMware Workstation, if it is not already running.
2. Log in to the VMware virtual machine using the following information:
· User: biadmin
· Password: password
3. Open a Gnome terminal window by right-clicking on the Desktop and selecting “Open in Terminal”.
4. Change to the directory containing the BigInsights start and stop scripts:
cd /opt/ibm/biginsights/bin
5. Start the Hadoop components (daemons) on the BigInsights server. You can practice starting all components with the
following command. Please note that it will take a few minutes to run:
./start-all.sh
i Note: You may get a message that the server has not started; please be patient, as it takes some time for the server to finish starting.
6. Sometimes certain Hadoop components may fail to start. You can start and stop the failed components one at a time by
using start.sh or stop.sh, respectively. For example, to start and stop Hadoop use:
./start.sh hadoop
./stop.sh hadoop
For example, if the console component fails, it can be started again using the ./start.sh console command; it should
then succeed without any problems. This approach can be used for any failed components.
Once all components have started successfully, you can move on to the next section.
4 Hadoop Distributed File System (HDFS)
There is more than one way to interact with HDFS:
1. You can use the command-line approach and invoke the FileSystem (fs) shell using the format: hadoop fs <args>.
This is the method we will use in this lab.
2. You can also manipulate HDFS using the BigInsights Web Console. You will explore the BigInsights Web Console in
another lab.
In this part, we will explore some basic HDFS commands. All HDFS commands start with hadoop, followed by dfs (distributed
file system) or fs (file system), followed by a dash and the command name. Many HDFS commands are similar to UNIX commands.
For details, refer to the Hadoop Command Guide and Hadoop FS Shell Guide.
We will start with the hadoop fs -ls command, which returns the list of files and directories with permission information.
Ensure the Hadoop components are all started, and from the same Gnome terminal window as before (and logged on as
biadmin), follow these instructions:
1. List the contents of the root directory.
hadoop fs -ls /
2. List the contents of your home directory:
hadoop fs -ls
or
hadoop fs -ls /user/biadmin
Note that in the first command of step 2 no directory was referenced; it is equivalent to the second command, where
/user/biadmin is explicitly specified. Each user gets their own home directory under /user. For example, the home directory of
user biadmin is /user/biadmin. Any command that does not specify an explicit directory will be relative to the user's home
directory.
3. To create the directory myTestDir you can issue the following command:
hadoop fs -mkdir myTestDir
Where was this directory created? As mentioned in the previous step, any relative path is interpreted relative to the user's home
directory.
4. Issue the ls command again to see the subdirectory myTestDir:
hadoop fs -ls
or
hadoop fs -ls /user/biadmin
i Note: If you specify a relative path to hadoop fs commands, they will implicitly be relative to your user directory in HDFS. For
example when you created the directory myTestDir, it was created in the /user/biadmin directory.
To use an HDFS command recursively, you generally add an “r” to the command (in the Linux shell this is typically
done with the “-R” argument).
5. For example, to do a recursive listing we'll use the -lsr command rather than just -ls, as in the examples below:
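A sketch of the two commands used here; the exact paths shown in the original listings are not reproduced, and the filter word test matches the commentary that follows:
hadoop fs -lsr
hadoop fs -lsr | grep test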
As you can see, the grep command only returned the lines which had test in them (thus removing the “Found x items”
line and the .staging and oozie-biad directories from the listing).
7. To move files between your regular Linux filesystem and HDFS, you can use the put and get commands. For example,
move the text file README to the Hadoop filesystem.
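A sketch of these commands; the local location of README is an assumption based on the directory used later in step 9:
hadoop fs -put /home/biadmin/bootcamp/input/lab01_HadoopCore/HDFS/README README
hadoop fs -ls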
You should now see a new file called /user/biadmin/README in the listing. Note the '1' in the second column of the listing;
this represents the replication factor. By default, the replication factor in a BigInsights cluster is 3, but since
this laboratory environment only has one node, the replication factor is 1.
8. In order to view the contents of this file, use the -cat command as follows:
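For example, to display the copy stored in your HDFS home directory:
hadoop fs -cat README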
You should see the output of the README file (that is stored in HDFS). We can also use the Linux diff command to see
whether the file we put into HDFS is actually the same as the original on the local filesystem.
9. Execute the commands below to use the diff command:
cd /home/biadmin/bootcamp/input/lab01_HadoopCore/HDFS/
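The comparison itself can be expressed in more than one way; one sketch, using bash process substitution to feed the HDFS copy to diff:
diff <(hadoop fs -cat README) README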
Since the diff command produces no output, we know that the files are the same (diff prints the lines that differ between the
files).
To find the size of files you need to use the -du or -dus commands. Keep in mind that these commands return the file
size in bytes.
10. To find the size of the README file use the following command:
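For example, using the README copied earlier to your HDFS home directory:
hadoop fs -du README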
11. To find the size of all files individually in the /user/biadmin directory use the following command:
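For example:
hadoop fs -du /user/biadmin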
12. To find the size of all files in total of the /user/biadmin directory use the following command:
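For example:
hadoop fs -dus /user/biadmin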
13. To get a description of all available HDFS shell commands, use the help command:
hadoop fs -help
14. For specific help on a command, add the command name after help. For example, to get help on the dus command
you’d do the following:
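For example:
hadoop fs -help dus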
5 MapReduce
Now that we have seen how the FileSystem (fs) shell can be used to interact with HDFS, we will use the same hadoop command
to launch MapReduce jobs. In this section, we will walk through the steps required to run a MapReduce
program. The code for a MapReduce program is packaged in a compiled .jar file. Hadoop will load the JAR into HDFS
and distribute it to the data nodes, where the individual tasks of the MapReduce job will be executed. Hadoop ships with some
example MapReduce programs to run. One of these is a distributed WordCount program, which reads text files and counts how
often words occur.
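Before running the job, the sample input text files are copied into HDFS. A minimal sketch of that first step, using a hypothetical local sample directory and a hypothetical HDFS directory named WordCountInput:
hadoop fs -mkdir WordCountInput
hadoop fs -put /home/biadmin/bootcamp/input/lab01_HadoopCore/MapReduce/*.txt WordCountInput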
2. Verify that the files have been copied with the following command:
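A sketch of that check, followed by the job submission and output listing that the next remark refers to. The WordCountInput and WordCountOutput directory names are hypothetical, and the exact name and location of the examples jar varies with the Hadoop version shipped in BigInsights:
hadoop fs -ls WordCountInput
hadoop jar $HADOOP_HOME/hadoop-examples.jar wordcount WordCountInput WordCountOutput
hadoop fs -ls WordCountOutput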
In this case, the output was not split into multiple files.
5. To view the contents of the part-r-00000 file, issue the command below:
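A sketch, again using the hypothetical WordCountOutput directory:
hadoop fs -cat WordCountOutput/part-r-00000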
i Note: You can use the BigInsights Web Console to run applications such as WordCount. This same application (though with different
input files) will be run again in the lab describing the BigInsights Web Console. More detail about the job will also be described then.
6 Pig
cd /home/biadmin/bootcamp/input/lab01_HadoopCore/PigHiveJaql
head -5 googlebooks-1988.csv
The columns represent the word, the year, the number of occurrences of that word in the corpus, the
number of pages on which that word appeared, and the number of books in which that word appeared.
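The data file is then copied into HDFS so that Pig can read it. A sketch of that command; the target path is an assumption based on the directory named in the note below:
hadoop fs -put googlebooks-1988.csv pighivejaql/googlebooks-1988.csv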
Note that directory /user/biadmin/pighivejaql is created automatically for you when the above command is executed.
3. Start Pig. If it has not been added to the PATH, you can add it, or switch to the $PIG_HOME/bin directory:
cd $PIG_HOME/bin
./pig
4. We are going to use a Pig UDF to compute the absolute value of each integer. The UDF is located inside the
piggybank.jar file (This jar file was created from the source, following the instructions in
https://cwiki.apache.org/confluence/display/PIG/PiggyBank, and copied to the piggybank directory). We use the
REGISTER command to load this jar file:
REGISTER /opt/ibm/biginsights/pig/contrib/piggybank/java/piggybank.jar;
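The statements that load the data and group it by word length are not reproduced here; a hypothetical Pig Latin sketch, assuming the file is tab delimited (as the Hive table definition later in this lab suggests), the column layout described above, and the piggybank LENGTH UDF for computing word lengths:
records = LOAD 'pighivejaql/googlebooks-1988.csv' USING PigStorage('\t') AS (word:chararray, year:int, wordcount:int, pagecount:int, bookcount:int);
-- compute the length of each word, keeping its count
lengths = FOREACH records GENERATE org.apache.pig.piggybank.evaluation.string.LENGTH(word) AS wordlength, wordcount;
-- group the records by word length
grouped = GROUP lengths BY wordlength;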
This returns instantly. The processing is delayed until the data needs to be reported.
7. Sum the word counts for each word length using the SUM function with the FOREACH GENERATE command.
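Using the hypothetical aliases from the sketch above, that statement could look like the following; the alias final matches the one dumped in the next step:
final = FOREACH grouped GENERATE group AS wordlength, SUM(lengths.wordcount) AS totalcount;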
8. Use the DUMP command to print the result to the console. This will cause all the previous steps to be executed.
DUMP final;
9. Quit pig.
grunt> quit
1. Ensure the Apache Derby component is started. Apache Derby is the default database used as the metastore in Hive. A
quick way to verify that it is started is to try starting it:
start.sh derby
2. Start Hive interactively. Change to the $HIVE_HOME/bin directory first, and execute ./hive from there:
cd $HIVE_HOME/bin
./hive
3. Create a table named wordlist to hold the data from the file:
CREATE TABLE wordlist (word STRING, year INT, wordcount INT, pagecount INT,
bookcount INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
4. Load the data from the googlebooks-1988.csv file into the wordlist table.
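A sketch of that statement; the local path is the directory used at the start of this section:
LOAD DATA LOCAL INPATH '/home/biadmin/bootcamp/input/lab01_HadoopCore/PigHiveJaql/googlebooks-1988.csv' INTO TABLE wordlist;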
5. Create a table named wordlengths to store the counts for each word length for our histogram.
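For example; the column names are assumptions that the next two steps reuse:
CREATE TABLE wordlengths (wordlength INT, wordcount INT);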
6. Fill the wordlengths table with word length data from the wordlist table calculated with the length function.
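A sketch using Hive's built-in length function and the assumed column names above:
INSERT OVERWRITE TABLE wordlengths SELECT length(word), wordcount FROM wordlist;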
7. Produce the histogram by summing the word counts grouped by word length.
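A sketch of that query:
SELECT wordlength, SUM(wordcount) FROM wordlengths GROUP BY wordlength;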
8. Quit Hive.
quit;
8 Jaql
cd /home/biadmin/bootcamp/input/lab01_HadoopCore/PigHiveJaql
head -5 googlebooks-1988.del
The columns represent the word, the year, the number of occurrences of that word in the corpus, the
number of pages on which that word appeared, and the number of books in which that word appeared.
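The file also needs to be in HDFS for Jaql to read it. A sketch of that copy; the HDFS target directory is an assumption, reusing the one from the Pig section:
hadoop fs -put googlebooks-1988.del pighivejaql/googlebooks-1988.del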
3. Change directory to $JAQL_HOME/bin, and then execute ./jaqlshell to start the JaqlShell.
cd $JAQL_HOME/bin
./jaqlshell
4. Read the comma-delimited file from HDFS. Note that this operation might take a few minutes to complete.
5. Transform each word into its length by applying the strLen function.
6. Produce the histogram by summing the word counts grouped by word length.
7. Quit Jaql.
quit;
9 Summary
You have just completed Lab 1, which focused on the basics of the Hadoop platform, including HDFS, MapReduce, Pig, Hive,
and Jaql. You should now know how to perform basic tasks on the platform: working with files in HDFS, running a sample
MapReduce job, and using Pig, Hive, and Jaql to summarize data.
IBM Canada
8200 Warden Avenue
Markham, ON
L6G 1C7
Canada
IBM, the IBM logo, ibm.com and Tivoli are trademarks or registered
trademarks of International Business Machines Corporation in the
United States, other countries, or both. If these and other
IBM trademarked terms are marked on their first occurrence in this
information with a trademark symbol (® or ™), these symbols indicate
U.S. registered or common law trademarks owned by IBM at the time
this information was published. Such trademarks may also be
registered or common law trademarks in other countries. A current list
of IBM trademarks is available on the Web at “Copyright and
trademark information” at ibm.com/legal/copytrade.shtml
Product data has been reviewed for accuracy as of the date of initial
publication. Product data is subject to change without notice. Any
statements regarding IBM’s future direction and intent are subject to
change or withdrawal without notice, and represent goals and
objectives only.