192.168.0.1 NameNode
192.168.0.2 SecondaryNameNode, JobTracker
192.168.0.100 DataNode, TaskTracker
The configuration files are as follows:
core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>http://localhost:9000</value>
</property>
hdfs-site.xml:
<property>
  <name>dfs.data.dir</name>
  <!-- where on the local filesystem a DataNode should store its blocks -->
  <value>/home/smile/workspace/hadoop/hadoop-data/hadoop-data2,
         /home/smile/workspace/hadoop/hadoop-data/hadoop-data3</value>
</property>
mapred-site.xml:
<property>
  <name>mapred.local.dir</name>
  <value>/home/smile/workspace/hadoop/hadoop-mapred/mapred-tmp2,
         /home/smile/workspace/hadoop/hadoop-mapred/mapred-tmp3</value>
</property>
When starting with start-all.sh, the JobTracker and TaskTracker never come up.
The log file shows
java.net.BindException: Cannot assign requested address
How should the JobTracker be configured?
PS:
I tried setting mapred.job.tracker to localhost:9001. With that the JobTracker starts, but the TaskTracker still will not start.
The log file shows
problem cleaning system directory: null
java.io.IOException: No FileSystem for scheme: http
at org.apache.hadoop.fs.FileSystem.createFileSyste
Is that the cause?
------ Solution --------------------------------------------
core-site.xml:
[the property XML in this reply was stripped when it was posted]
------ Solution --------------------------------------------
[the configuration XML in this reply was stripped when it was posted]
Look at what you have configured, it is completely wrong: http://localhost:9000. Since you want to set up a cluster, why use localhost? Not to mention the http scheme. The same goes for mapred.job.tracker.
------ Solution --------------------------------------------
Use the NameNode's IP, not localhost.
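For example, with the machines listed at the top of this thread, the two key properties would look roughly like this (a sketch only; ports 9000 and 9001 are the ones already mentioned in this thread, adjust to your own setup):
core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.0.1:9000</value>
</property>
mapred-site.xml:
<property>
  <name>mapred.job.tracker</name>
  <value>192.168.0.2:9001</value>
</property>
Note that fs.default.name takes an hdfs:// URI, while mapred.job.tracker is a plain host:port with no scheme at all.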
------ Solution --------------------------------------------
Use the following, replacing node1 with the hostname of your NameNode machine. In /etc/hosts on every machine, keep the hostname-to-IP mappings for all of the machines.
<property>
  <name>fs.default.name</name>
  <value>hdfs://node1:9000</value>
</property>
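For instance, with the three machines in this thread, /etc/hosts on every machine could contain something like the following (node1 is the NameNode name used above; node2 and node3 are just example names):
192.168.0.1    node1
192.168.0.2    node2
192.168.0.100  node3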
------ For reference only ---------------------------------------
Have you configured the slaves file?
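In case it helps, on the NameNode conf/slaves lists the machines that run a DataNode/TaskTracker and conf/masters lists the SecondaryNameNode, one entry per line. With the machines in this thread that would be roughly (host names can be used instead of IPs once /etc/hosts is set up):
conf/slaves:
192.168.0.100
conf/masters:
192.168.0.2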
------ For reference only ---------------------------------------
Also, have you set up passwordless ssh between your NameNode and DataNodes? Test that the hadoop user can ssh in without a password.
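A quick way to set that up, as the hadoop user on the NameNode (a sketch; repeat the copy step for every slave):
ssh-keygen -t rsa                   # accept the defaults, empty passphrase
ssh-copy-id hadoop@localhost        # the NameNode also sshes to itself
ssh-copy-id hadoop@192.168.0.100    # copy the public key to each slave
ssh hadoop@192.168.0.100            # should now log in without a password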
------ For reference only ---------------------------------------
They are all configured, and it still does not work ~
------ For reference only ---------------------------------------
I am running into the same problem as you. Hoping for a reply, hoping for an answer.
------ For reference only ---------------------------------------
So how should it be fixed?
------ For reference only ---------------------------------------
I have worked with this too. Your configuration is wrong; please look back at the 4th floor, at the part marked in red.
------ For reference only ---------------------------------------
Your configuration is indeed wrong, and you are running a distributed cluster, not the standalone version, so do not use localhost. There is detailed configuration information available online.
------ For reference only ---------------------------------------
Er... also looking for an answer.
------ For reference only ---------------------------------------
The whole string hdfs://node1 here is shared by all the nodes in the cluster. If you write localhost instead, it means a different machine on each node.
To be safe, it is best to put the same IP-to-hostname mappings in /etc/hosts on every machine, for example:
192.168.1.1 master
....
192.168.1.14 slave14
Then use these host names in all of the configuration files.
Even configuring with a raw IP such as hdfs://192.168.xx.xx can cause all sorts of unpredictable errors, such as MapReduce jobs getting stuck, or the client being unable to find the host.
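If name resolution is in doubt, it can be checked quickly from each node (the host names here are the example ones above):
hostname            # should print this machine's own host name
ping -c 1 master    # every name in /etc/hosts should resolve and respond
ping -c 1 slave14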
------ For reference only ---------------------------------------
The value of fs.default.name in core-site.xml is wrong; for details refer to the official documentation!
------ For reference only ---------------------------------------
Take a look at the log files. There should be an error message there.
------ For reference only ---------------------------------------
Hadoop cannot start the JobTracker and TaskTracker; the error says the files under {mapred.system.dir} need to be deleted, but there is no such directory configured in mapred-site.xml.... Looking for expert help.
Hard to say what causes the problem.