Sunday, August 4, 2013

Solutions to common Hadoop errors

 

1. Check Hadoop startup and operation through commands and log files

 

On the NameNode side, you can run

 
  
tail -100 /var/log/hadoop/hadoop/hadoop-hadoop-namenode-hadoop-namenode.log
 
 

to view the NameNode's running log.

 

 

Similarly, on the DataNode side, you can run

 
  
cat /var/log/hadoop/hadoop/hadoop-hadoop-datanode-hadoop-datanode1.log
 
 

to view the DataNode's running log.

 

 

You can also run the jps command on both the NameNode and DataNode sides to see which Hadoop services are currently running.
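
For example, on a Hadoop 1.x cluster like the one used later in this post, the NameNode side typically lists daemons such as NameNode, SecondaryNameNode, and JobTracker, while a DataNode lists DataNode and TaskTracker. The PIDs below are only placeholders, and the exact daemons depend on how the roles are distributed across your nodes:

[hadoop@hadoop-namenode ~]$ jps
2786 NameNode
2925 SecondaryNameNode
3012 JobTracker
3344 Jps

[hadoop@hadoop-datanode1 ~]$ jps
2456 DataNode
2587 TaskTracker
2833 Jps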

 

 

2. The NameNode does not start:

 

Cannot lock storage ......tmp/dfs/name. The directory is already locked.

 

This is usually because the account running Hadoop does not have permission on the tmp/dfs/name folder. You can fix it with the following command:

 
  
chown -R hadoop:hadoop /usr/hadoop
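
To confirm that the ownership change took effect, you can list the directories afterwards. This is only a quick check, assuming hadoop.tmp.dir is /usr/hadoop/tmp as in the core-site.xml shown in the next item; adjust the path to your own layout:

# both entries should now be owned by hadoop:hadoop
ls -ld /usr/hadoop /usr/hadoop/tmp/dfs/name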
 
 

 

3. The DataNode does not start:

 

The DataNode's log shows a namespaceID mismatch, for example: namenode namespaceID = 1713611278; datanode namespaceID = 596511341

 

This problem is usually caused by running hadoop namenode -format on the NameNode more than once. In Hadoop's core-site.xml file (the file name differs between Hadoop versions), find hadoop.tmp.dir and empty the corresponding folder. For example:

 
  
[hadoop@hadoop-datanode1 hadoop]$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<!-- global properties -->
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/hadoop/tmp</value>
</property>
 
 

Empty that folder:

 
  
[hadoop@hadoop-datanode1 tmp]$ rm -rf /usr/hadoop/tmp/*
 
 

Then restart Hadoop and run jps on the DataNode side to check whether the DataNode has started.
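
A minimal restart sketch, assuming a Hadoop 1.x installation where the bin/ scripts are on the PATH and hadoop.tmp.dir is /usr/hadoop/tmp as above (only the DataNode's tmp folder was emptied, so its namespaceID is re-created from the NameNode at startup):

# on the NameNode: restart the whole cluster
stop-all.sh
start-all.sh

# on the DataNode: the DataNode process should now appear
jps

# optional: the on-disk namespaceID should now match the NameNode's
cat /usr/hadoop/tmp/dfs/data/current/VERSION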

 

 

4. The wordcount program cannot find the input folder on the file system:

 

Input path does not exist: hdfs://localhost:9000/user/input

 

In a cluster environment, files are processed on HDFS, so the files to be processed must first be copied into an HDFS folder. In the following example, a new folder is created on HDFS, the local wordcount input files are copied into it, and then the program is run.

 
  
[hadoop@hadoop-namenode ~]$ hadoop fs -mkdir /tmp/wordcount/input
[hadoop@hadoop-namenode ~]$ hadoop fs -put /home/hadoop/wordcount/input /tmp/wordcount/input
[hadoop@hadoop-namenode ~]$ hadoop fs -ls /tmp/wordcount/input

hadoop jar /home/hadoop/hadoop-examples-1.1.2.jar wordcount /tmp/wordcount/input/input /tmp/wordcount/output
 
 

 

View the results:

 
  
hadoop fs -cat /tmp/wordcount/output/part-r-00000
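
The reducer output is plain text with one word and its count per line, separated by a tab. For a hypothetical input consisting only of the line "hello world hello", the output would look roughly like:

hello	2
world	1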
 
