100+ Hadoop Interview Questions From Interviews
WWW.BIGDATAINTERVIEWQUESTIONS.COM
[email protected]
MATURING HADOOP INTERVIEWS
Companies from almost all domains have started
investing in Big Data technologies, increasing the need
for Hadoop professionals.
Assume you have Research, Marketing and Finance teams funding 60%, 30% and
10% respectively of your Hadoop Cluster. How will you assign only 60% of cluster
resources to Research, 30% to Marketing and 10% to Finance during peak load?
How do you benchmark your Hadoop cluster with tools that come with Hadoop?
Assume you are doing a join and you notice that all the reducers finish quickly except
one, which runs for a long time. How do you address the problem in Pig?
Assume you have a sales table in a company and it has sales entries from
salespeople around the globe. How do you rank each salesperson by country
based on their sales volume in Hive?
Can you change the number of mappers to be created for a job in Hadoop?
How do you debug a performance issue or a long running job?

This is an open-ended question and the interviewer is trying to gauge the
level of hands-on experience you have in solving production issues. Use
your day-to-day work experience to answer this question. Here are some
scenarios and responses to help you construct your answer; at a very high
level you would work through steps like the ones below.
Scenario 1 - A job with 100 mappers and 1 reducer takes a long time for the
reducer to start after all the mappers are complete. One of the reasons
could be that the reducer is spending a lot of time copying the map outputs.
In this case we can try a couple of things (a short sketch follows this list).
1. Make sure the joins are written in an optimal way with memory usage
in mind. For example, in Pig joins, the LEFT-hand-side tables are sent to
the reducer first and held in memory, while the RIGHT-most table is
streamed through the reducer. So make sure the RIGHT-most table is the
largest of the datasets in the join.
2. We can also increase the memory available to the map and reduce tasks
by setting mapreduce.map.memory.mb and mapreduce.reduce.memory.mb.
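A minimal sketch of both ideas in Pig; small_lookup, big_sales and the memory values
are hypothetical and only illustrative.

Illustration
-- request more task memory from within the script (illustrative values)
SET mapreduce.map.memory.mb 2048;
SET mapreduce.reduce.memory.mb 4096;
-- keep the largest relation right-most so it is streamed rather than held in memory
small_lookup = LOAD 'small_lookup' AS (id:int, name:chararray);
big_sales = LOAD 'big_sales' AS (id:int, amount:double);
jnd = JOIN small_lookup BY id, big_sales BY id;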
Scenario 3 - Understanding the data helps a lot in optimizing the way we
use the datasets in Pig and Hive scripts (example snippets follow this list).
1. If you have smaller tables in the join, they can be sent to the distributed
cache and loaded in memory on the Map side, and the entire join can
be done on the Map side, thereby avoiding the shuffle and reduce
phase altogether. This tremendously improves performance. Look
up USING 'replicated' in Pig and MAPJOIN or
hive.auto.convert.join in Hive.
2. If the data is already sorted you can use USING 'merge', which will do
a map-only join.
3. If the data is bucketed in Hive, you may use
hive.optimize.bucketmapjoin or
hive.optimize.bucketmapjoin.sortedmerge, depending on the
characteristics of the data.
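A few illustrative snippets of these options; big_fact, small_dim and sorted_dim are
hypothetical table/relation names.

Illustration
-- Pig: fragment-replicate (map-side) join; the small relation must be listed last
jnd = JOIN big_fact BY id, small_dim BY id USING 'replicated';
-- Pig: merge join when both inputs are already sorted on the join key
jnd2 = JOIN big_fact BY id, sorted_dim BY id USING 'merge';
-- Hive: let the optimizer convert small-table joins into map joins
SET hive.auto.convert.join=true;
-- Hive: bucketed / sorted-bucketed map joins
SET hive.optimize.bucketmapjoin=true;
SET hive.optimize.bucketmapjoin.sortedmerge=true;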
Scenario 4 - The Shuffle process is the heart of a MapReduce program
and it can be tweaked for performance improvement (example property
settings follow this list).
1. If you see that lots of records are being spilled to disk (check the
Spilled Records counter in your MapReduce output), you can
increase the memory available to the Map side of the shuffle by
increasing io.sort.mb (mapreduce.task.io.sort.mb in newer releases).
This reduces the amount of map output written to disk so the sorting
of the keys can be performed in memory.
2. On the reduce side the merge operation (merging the output from
several mappers) can be done on disk by setting
mapred.inmem.merge.threshold to 0.
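The corresponding properties can be set in mapred-site.xml or passed per job; the
values below are examples only.

Illustration
<property>
<name>mapreduce.task.io.sort.mb</name>
<value>512</value>
</property>
<property>
<name>mapred.inmem.merge.threshold</name>
<value>0</value>
</property>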
Assume you have Research, Marketing and Finance teams funding 60%, 30% and 10%
respectively of your Hadoop Cluster. How will you assign only 60% of cluster resources
to Research, 30% to Marketing and 10% to Finance during peak load?
For this use case, you would have to define 3 queues under the root queue
and give appropriate capacity in % for each queue.
Illustration
The following properties would be defined in capacity-scheduler.xml
<property>
<name>yarn.scheduler.capacity.root.queues</name>
<value>research,marketing,finance</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.research.capacity</name>
<value>60</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.marketing.capacity</name>
<value>30</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.finance.capacity</name>
<value>10</value>
</property>
How do you benchmark your Hadoop cluster with tools that
come with Hadoop?
TestDFSIO is a read/write benchmark for HDFS. It stress-tests HDFS I/O and gives a
first impression of how fast the cluster is in terms of raw I/O.
NNBench is a load test for the NameNode. It generates a large number of HDFS
metadata requests (file creates, opens, deletes) with small payloads to stress the NameNode.
MRBench is a test for the MapReduce layer. It loops a small MapReduce job a
specified number of times and checks the responsiveness and efficiency of the cluster.
Illustration
TestDFSIO write test with 100 files and file size of 100 MB each.
TestDFSIO read test with 100 files and file size of 100 MB each.
NNBench test that creates 1000 files using 12 maps and 6 reducers.
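Sample commands for the tests above, assuming the benchmark classes live in the
jobclient tests jar; the exact jar name/path and flags vary slightly by distribution
and Hadoop version.

Illustration
hadoop jar hadoop-mapreduce-client-jobclient-*-tests.jar TestDFSIO -write -nrFiles 100 -fileSize 100MB
hadoop jar hadoop-mapreduce-client-jobclient-*-tests.jar TestDFSIO -read -nrFiles 100 -fileSize 100MB
hadoop jar hadoop-mapreduce-client-jobclient-*-tests.jar nnbench -operation create_write -maps 12 -reduces 6 -numberOfFiles 1000 -baseDir /benchmarks/NNBench
hadoop jar hadoop-mapreduce-client-jobclient-*-tests.jar mrbench -numRuns 50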
Assume you are doing a join and you notice that all the reducers finish
quickly except one, which runs for a long time. How do you address the
problem in Pig?

Pig collects all of the records for a given key together on a single
reducer.
In many data sets, there are a few keys that have three or more orders of
magnitude more records than other keys.
This results in one or two reducers that take much longer than the
rest. To deal with this, Pig provides skew join.
In the first MapReduce job, Pig scans the second input and identifies keys
that have so many records that they will not fit in memory.
The second MapReduce job then performs the actual join. For all records
except those with a key identified by the first job, Pig does a standard join.
For the records with keys identified by the first job, based on how many
records were seen for a given key, those records are split across an
appropriate number of reducers.
For the other input to the join (the one that is not split), only the records with
the keys in question are replicated to each reducer that handles one of those keys.
Illustration
jnd = join cinfo by city, users by city using 'skewed';
What is the difference between SORT BY and ORDER BY in
Hive?
SORT BY orders the data only within each reducer, thereby performing
a local ordering: each reducer's output will be sorted, but you will not
achieve a total ordering on the dataset. Total ordering is traded away for
better performance. ORDER BY, in contrast, guarantees a total ordering of
the output by sending all of the data through a single reducer, which can
be very slow for large datasets.
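A quick sketch, assuming a hypothetical sales table with an amount column.

Illustration
-- total ordering; all rows flow through a single reducer
SELECT * FROM sales ORDER BY amount DESC;
-- local ordering only; each reducer's output is sorted independently
SELECT * FROM sales SORT BY amount DESC;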
Assume you have a sales table in a company and it has sales entries from
salespeople around the globe. How do you rank each salesperson by country
based on their sales volume in Hive?

Hive supports several analytic (windowing) functions, and one of them,
RANK(), is designed for exactly this operation.
Illustration
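A sketch of such a query, assuming a hypothetical sales table with salesperson,
country and amount columns.

SELECT salesperson, country, total_sales,
  rank() OVER (PARTITION BY country ORDER BY total_sales DESC) AS sales_rank
FROM (
  SELECT salesperson, country, SUM(amount) AS total_sales
  FROM sales
  GROUP BY salesperson, country
) t;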
What happens to speculative tasks when a task completes successfully?

When a task completes successfully, all the duplicate (speculative) tasks that are
still running are killed. So if the original task completes before the
speculative task, then the speculative task is killed; on the other hand,
if the speculative task finishes first, then the original is killed.
What is the benefit of using counters in Hadoop?
Counters are useful for gathering statistics about a job. Assume you
have a 100-node cluster and a job with 100 mappers running in the
cluster on 100 different nodes.
Let's say you would like to know each time you see an invalid record in
your Map phase. You could add a log message in your Mapper so that
each time you see an invalid record you make an entry in the log.
But consolidating all the log messages from 100 different nodes would be
time consuming. You can use a counter instead and increment the
value of the counter every time you see an invalid record.
The nice thing about using counters is that they give you a consolidated
value for the whole job rather than 100 separate outputs.
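A minimal mapper sketch that increments such a counter; the class name, enum and the
"fewer than three fields means invalid" rule are hypothetical.

Illustration
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SalesMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  enum RecordQuality { INVALID_RECORDS }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String[] fields = value.toString().split(",");
    if (fields.length < 3) {
      // the framework aggregates this across all tasks into one job-wide value
      context.getCounter(RecordQuality.INVALID_RECORDS).increment(1);
      return;
    }
    context.write(new Text(fields[0]), new IntWritable(1));
  }
}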
What is the difference between an InputSplit and a
Block?
A block is the physical division of data in HDFS (128 MB by default in Hadoop 2.x)
and it does not respect logical record boundaries. An InputSplit is the logical
division of the input that is assigned to a single mapper; it only references the
data and is adjusted so that a record crossing a block boundary is processed by
exactly one mapper. You can inspect how a file is laid out in blocks with fsck.
Illustration
hdfs fsck /dir/hadoop-test -files -blocks -locations
What are the parameters of the mapper and reducer
functions?
The map and reduce method signatures tell you a lot about the type of input
and output your job will deal with.
Assuming you are using TextInputFormat, the map function's parameters could look like this -
Illustration
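A sketch of the signatures; with TextInputFormat the map input key is the byte offset
of the line (LongWritable) and the input value is the line itself (Text). The
word-count-style output types are just an example.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountExample {
  public static class MyMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      // key = byte offset of the line, value = the line of text
      context.write(new Text(value.toString()), new IntWritable(1));
    }
  }

  public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      // the reducer receives a key together with all values grouped for that key
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      context.write(key, new IntWritable(sum));
    }
  }
}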
A RecordReader uses the data within the boundaries created by the input
split to generate key/value pairs. Each generated key/value pair is then
passed, one at a time, to the mapper.
What is a sequence file in Hadoop?
A sequence file is a flat file made up of binary key/value pairs.
Sequence files support splitting even when the data inside the file is
compressed, which is not possible with a regular compressed file.
Compression can be applied per record, or you can choose to compress at the
block level, where multiple records are compressed together.
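A minimal sketch of writing a block-compressed sequence file with the SequenceFile
API; the output path and key/value types are arbitrary choices for the example.

Illustration
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SeqFileWriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.file(new Path("/tmp/example.seq")),
        SequenceFile.Writer.keyClass(IntWritable.class),
        SequenceFile.Writer.valueClass(Text.class),
        // BLOCK compression compresses many records together
        SequenceFile.Writer.compression(SequenceFile.CompressionType.BLOCK));
    try {
      for (int i = 0; i < 100; i++) {
        writer.append(new IntWritable(i), new Text("record-" + i));
      }
    } finally {
      writer.close();
    }
  }
}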