Sunday, July 28, 2013

Hadoop cannot start JobTracker and TaskTracker

My configuration spans 3 machines:
192.168.0.1 NameNode
192.168.0.2 SecondaryNameNode, JobTracker
192.168.0.100 DataNode, TaskTracker

The configuration files are as follows:
core-site.xml:

<property>
  <name>fs.default.name</name>
  <value>http://localhost:9000/</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/smile/workspace/hadoop/namenode-tmp</value>
</property>
hdfs-site.xml:

<property>
  <name>dfs.name.dir</name>
  <value>/home/smile/workspace/hadoop/namenode-log</value>
</property>

<!-- ... of a DataNode where it should store its blocks -->
<property>
  <name>dfs.data.dir</name>
  <value>/home/smile/workspace/hadoop/hadoop-data/hadoop-data1,/home/smile/workspace/hadoop/hadoop-data/hadoop-data2,/home/smile/workspace/hadoop/hadoop-data/hadoop-data3</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>

mapred-site.xml:

<property>
  <name>mapred.job.tracker</name>
  <value>192.168.0.2:9001</value>
</property>

<property>
  <name>mapred.system.dir</name>
  <value>/home/smile/workspace/hadoop/hadoop-mapred/system/</value>
</property>

<property>
  <name>mapred.local.dir</name>
  <value>/home/smile/workspace/hadoop/hadoop-mapred/mapred-tmp1,/home/smile/workspace/hadoop/hadoop-mapred/mapred-tmp2,/home/smile/workspace/hadoop/hadoop-mapred/mapred-tmp3</value>
</property>

<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>2</value>
</property>

<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>

When I start the cluster with start-all.sh, the JobTracker and TaskTracker never come up.
The log file shows
java.net.BindException: Cannot assign requested address

How should the JobTracker be configured here?

PS:
I tried setting mapred.job.tracker to localhost:9001; the JobTracker is then able to start, but the TaskTracker still will not start.
The log file shows
problem cleaning system directory: null
java.io.IOException: No FileSystem for scheme: http
at org.apache.hadoop.fs.FileSystem.createFileSystem
Is that the reason?
------ Solution --------------------------------------------
core-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000/</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/smile/workspace/hadoop/namenode-tmp</value>
</property>

------ Solution --------------------------------------------
<property>
  <name>mapred.job.tracker</name>
  <value>192.168.0.2:9001</value>
</property>

<property>
  <name>fs.default.name</name>
  <value>http://localhost:9000/</value>
</property>

Look at what you have configured here; it is completely wrong. http://localhost:9000: since you want to set up a cluster, why use localhost, and why http at all? I won't even go into that, nor into mapred.job.tracker.
------ Solution --------------------------------------------
<property>
  <name>fs.default.name</name>
  <value>http://localhost:9000/</value>
</property>
Use the NameNode's IP here instead of localhost.
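For example, a sketch of the corrected property, assuming the NameNode address given in the question and the hdfs:// scheme suggested above:

<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.0.1:9000/</value>
</property>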
------ Solution --------------------------------------------
Put the hostname of your NameNode machine in place of node1 below, and record the hostname-to-IP mappings of all machines in /etc/hosts on every machine.

<property>
  <name>fs.default.name</name>
  <value>hdfs://node1</value>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>node1:54311</value>
</property>
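For instance, an /etc/hosts sketch covering the three machines from the question (the hostnames node1, node2 and node3 are only placeholders):

192.168.0.1    node1    # NameNode
192.168.0.2    node2    # SecondaryNameNode, JobTracker
192.168.0.100  node3    # DataNode, TaskTracker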

------ For reference only ---------------------------------------
Have you configured the slaves file?
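If not, a sketch for this layout, assuming Hadoop 1.x, where conf/masters lists the SecondaryNameNode host and conf/slaves lists the DataNode/TaskTracker hosts:

conf/masters:
192.168.0.2

conf/slaves:
192.168.0.100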
------ For reference only ---------------------------------------
Also, have you set up passwordless SSH between your NameNode and DataNodes? Check that the hadoop user can log in over SSH without a password.
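A minimal way to check and set this up from the NameNode, assuming the user is called hadoop and using the DataNode IP from the question:

# generate a key pair for the hadoop user (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# install the public key on the DataNode
ssh-copy-id hadoop@192.168.0.100
# this should now log in without asking for a password
ssh hadoop@192.168.0.100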
------ For reference only ---------------------------------------
They are all configured, but it still does not work ~
------ For reference only ---------------------------------------
I ran into the same problem; hoping for a reply, hoping for an answer.
------ For reference only ---------------------------------------
So how should it be done?
------ For reference only ---------------------------------------
I have dealt with this too; your configuration is wrong. Please go back and check the 4th-floor reply, the part in red.
------ For reference only ---------------------------------------
Your configuration is indeed wrong. You are running a distributed setup, not the standalone version, so do not use localhost. There is detailed configuration information online.
------ For reference only ---------------------------------------
Er... also looking for an answer.
------ For reference only ---------------------------------------
<property>
  <name>fs.default.name</name>
  <value>hdfs://node1</value>
</property>

Here the string hdfs://node1 is shared by every node in the cluster; if you write localhost, it resolves to something different on different machines.
To be safe, it is best to put the IP-to-hostname mapping of every machine in the cluster into the hosts configuration of every machine:
192.168.1.1 master
....
192.168.1.14 slave14

Then use these hostnames in all configurations.

Even configuring with an IP such as hdfs://192.168.xx.xx can cause all kinds of unpredictable errors, for example MapReduce jobs getting stuck, or the client not being able to find the host.
------ For reference only ---------------------------------------
The fs.default.name property value in core-site.xml is wrong; see the official documentation for details!
------ For reference only ---------------------------------------
Take a look at the log files; there should be an error message.
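For the two daemons in question, the logs usually sit under the Hadoop logs directory; a sketch, assuming a default Hadoop 1.x layout with HADOOP_HOME set:

# on the JobTracker machine
tail -n 50 $HADOOP_HOME/logs/hadoop-*-jobtracker-*.log
# on the TaskTracker machine
tail -n 50 $HADOOP_HOME/logs/hadoop-*-tasktracker-*.log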
------ For reference only ---------------------------------------
Hadoop cannot start the JobTracker and TaskTracker for me either, but the error says the files under {mapred.system.dir} need to be deleted, and there is no mapred-site.xml in that directory... Asking the experts for help.

1 comment: