Friday, July 26, 2013

Bulkload HFile generation fails with a connection exception

With 100 rows the HFiles are generated successfully, but when importing 100 million rows the cluster reports the following error:

13/05/03 20:10:09 INFO mapred.JobClient: Task Id: attempt_201305031451_0029_m_000058_2, Status: FAILED
java.lang.RuntimeException: java.io.IOException: org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@3465b738 closed
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:200)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at org.apache.hadoop.mapred.Task.initialize(Task.java:513)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:353)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1122)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.io.IOException: org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@3465b738 closed
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:795)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:783)
    at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:247)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:211)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:169)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:195)
    ... 9 more

13/05/03 20:10:44 INFO mapred.JobClient: map 2% reduce 0%
13/05/03 20:11:27 INFO mapred.JobClient: Task Id: attempt_201305031451_0029_m_000073_0, Status: FAILED
java.lang.RuntimeException: java.io.IOException: org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@3bfa681c closed
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:200)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at org.apache.hadoop.mapred.Task.initialize(Task.java:513)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:353)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1122)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.io.IOException: org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@3bfa681c closed
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:795)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:783)
    at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:247)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:211)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:169)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:195)
    ... 9 more

13/05/03 20:11:33 INFO mapred.JobClient: Job complete: job_201305031451_0029
13/05/03 20:11:33 INFO mapred.JobClient: Counters: 8
13/05/03 20:11:33 INFO mapred.JobClient: Job Counters
13/05/03 20:11:33 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=12013269
13/05/03 20:11:33 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/05/03 20:11:33 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/05/03 20:11:33 INFO mapred.JobClient: Rack-local map tasks=77
13/05/03 20:11:33 INFO mapred.JobClient: Launched map tasks=187
13/05/03 20:11:33 INFO mapred.JobClient: Data-local map tasks=110
13/05/03 20:11:33 INFO mapred.JobClient: Failed map tasks=1
13/05/03 20:11:33 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=8995

My zookeeper.session.timeout is 180000.
Hoping for help from the experts; much appreciated.
------ Solution --------------------------------------------
Bulkload has two steps: importtsv generates the HFiles, and completebulkload loads them into HBase.
At which step do you get this error?
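The two steps above can be sketched as shell commands. This is only a hedged illustration: the jar name, table name, column mapping, and HDFS paths are hypothetical examples, and the commands are printed rather than executed since they need a live cluster.

```shell
# Hypothetical names; adjust to your cluster and HBase version.
TABLE=mytable
INPUT=/user/root/input.tsv
HFILE_OUT=/user/root/hfiles

# Step 1: importtsv writes HFiles instead of live Puts when
# -Dimporttsv.bulk.output is set.
STEP1="hadoop jar hbase.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,cf:col -Dimporttsv.bulk.output=$HFILE_OUT $TABLE $INPUT"

# Step 2: completebulkload moves the generated HFiles into the table's regions.
STEP2="hadoop jar hbase.jar completebulkload $HFILE_OUT $TABLE"

# Printed here rather than executed, since they need a running cluster.
echo "$STEP1"
echo "$STEP2"
```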
------ Solution --------------------------------------------
That's progress. Have you synchronized the configuration across the three machines?

Also, have you configured /mnt/disk01/hadoop/zookeeper/myid yet?

Please paste the /etc/hosts of all three machines.
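For reference, ZooKeeper reads dataDir/myid to learn which server.N in zoo.cfg it is, so each node needs its own myid file. A minimal sketch (the dataDir here is a local stand-in; on this cluster it would be /mnt/disk01/hadoop/zookeeper):

```shell
# Create the myid file that maps this host to server.N in zoo.cfg.
DATADIR=./zk-data               # stand-in for /mnt/disk01/hadoop/zookeeper
mkdir -p "$DATADIR"
echo 1 > "$DATADIR/myid"        # node01 -> 1; use 2 on node02, 3 on node03
cat "$DATADIR/myid"
```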
------ Solution --------------------------------------------
In zoo.cfg:
server.1=node01.sdass.com:2888:3888
server.2=node02.sdass.com:2888:3888
server.3=node03.sdass.com:2888:3888

In hbase-site.xml, configure:
......
<property>
<name>hbase.zookeeper.quorum</name>
<value>node01.sdass.com,node02.sdass.com,node03.sdass.com</value>
</property>

In hbase-env.sh, set:
export HBASE_MANAGES_ZK=true

Stop HBase, stop ZooKeeper,
then run start-hbase.sh.

If that still fails, check your HDFS configuration.
------ Solution --------------------------------------------
INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181

Isn't something configured wrong here? Why would it be connecting to LOCALHOST?
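The HBase client builds its ZooKeeper connect string from hbase.zookeeper.quorum; when that property is absent from the configuration a process actually sees, it falls back to localhost:2181. A quick way to check what a node's config contains (the conf path below is a typical location, not necessarily this cluster's; adjust to your install):

```shell
# Inspect the quorum setting the HBase client on this node would read.
CONF=/etc/hbase/conf/hbase-site.xml
if [ -f "$CONF" ]; then
  grep -A1 "hbase.zookeeper.quorum" "$CONF"
else
  echo "no hbase-site.xml found -> client would default to localhost:2181"
fi
```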

------ For reference only ---------------------------------------
My initial guess is that the error occurs in the importtsv step, while generating the HFiles, and is caused by ZooKeeper communication; I don't know the specific reason why.
------ For reference only ---------------------------------------
Has HBase even started yet? Check that first.
------ For reference only ---------------------------------------

Please post your configuration so we can take a look.
------ For reference only ---------------------------------------
========================== zoo.cfg ==========================
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/mnt/disk01/hadoop/zookeeper
# the port at which the clients will connect
clientPort=2181
server.1=node01.sdass.com:2888:3888
server.2=node02.sdass.com:2888:3888
server.3=node03.sdass.com:2888:3888

============================ hbase-site.xml ============================

<configuration>

<property>
<name>hbase.rootdir</name>
<value>hdfs://node01:8020/hbase/data</value>
</property>

<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>

<property>
<name>hbase.tmp.dir</name>
<value>/var/log/hbase</value>
<description>Temporary directory on the local filesystem.
Change this setting to point to a location more permanent
than '/tmp' (The '/tmp' directory is often cleared on
machine restart).</description>
</property>

<property>
<name>hbase.master.info.bindAddress</name>
<value>node01</value>
</property>

<property>
<name>hbase.regionserver.global.memstore.upperLimit</name>
<value>0.4</value>
<description>Maximum size of all memstores in a region server before new
updates are blocked and flushes are forced. Defaults to 40% of heap</description>
</property>

<property>
<name>hbase.regionserver.handler.count</name>
<value>30</value>
</property>

<property>
<name>hbase.hregion.majorcompaction</name>
<value>86400000</value>
</property>

<property>
<name>hbase.regionserver.global.memstore.lowerLimit</name>
<value>0.35</value>
</property>

<property>
<name>hbase.hregion.memstore.block.multiplier</name>
<value>2</value>
</property>

<property>
<name>hbase.hregion.memstore.flush.size</name>
<value>134217728</value>
</property>

<property>
<name>hbase.hregion.memstore.mslab.enabled</name>
<value>true</value>
</property>

<property>
<name>hbase.hregion.max.filesize</name>
<value>1073741824</value>
</property>

<property>
<name>hbase.client.scanner.caching</name>
<value>100</value>
</property>

<property>
<name>zookeeper.session.timeout</name>
<value>180000</value>
</property>

<property>
<name>hbase.client.keyvalue.maxsize</name>
<value>10485760</value>
</property>

<property>
<name>hbase.hstore.compactionThreshold</name>
<value>3</value>
</property>

<property>
<name>hbase.hstore.blockingStoreFiles</name>
<value>7</value>
</property>

<property>
<name>hfile.block.cache.size</name>
<value>0.25</value>
</property>

<property>
<name>hbase.superuser</name>
<value>hbase</value>
</property>

<property>
<name>hbase.coprocessor.region.classes</name>
<value></value>
<description>A comma-separated list of Coprocessors that are loaded by
default on all tables. For any override coprocessor method, these classes
will be called in order. After implementing your own Coprocessor, just put
it in HBase's classpath and add the fully qualified class name here.
A coprocessor can also be loaded on demand by setting HTableDescriptor.</description>
</property>

<property>
<name>hbase.coprocessor.master.classes</name>
<value></value>
<description>A comma-separated list of
org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors that are
loaded by default on the active HMaster process. For any implemented
coprocessor methods, the listed classes will be called in order. After
implementing your own MasterObserver, just put it in HBase's classpath
and add the fully qualified class name here.</description>
</property>

<!-- The following three properties are used together to create the list of
host:peer_port:leader_port quorum servers for ZooKeeper. -->

<property>
<name>hbase.zookeeper.quorum</name>
<value>node01,node02,node03</value>
<description>Comma separated list of servers in the ZooKeeper Quorum.
For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
By default this is set to localhost for local and pseudo-distributed modes
of operation. For a fully-distributed setup, this should be set to a full
list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
this is the list of servers which we will start/stop ZooKeeper on.</description>
</property>

<property>
<name>dfs.support.append</name>
<value>true</value>
<description>Does HDFS allow appends to files?
This is an hdfs config. set in here so the hdfs client will do append support.
You must ensure that this config. is true serverside too when running hbase
(You will have to restart your cluster after setting it).</description>
</property>

<property>
<name>dfs.client.read.shortcircuit</name>
<value>true</value>
<description>Enable/disable short circuit read for your client.
Hadoop servers should be configured to allow short circuit read
for the hbase user for this to take effect</description>
</property>

<property>
<name>dfs.client.read.shortcircuit.skip.checksum</name>
<value>false</value>
<description>Enable/disable skipping the checksum check</description>
</property>

</configuration>
------ For reference only ---------------------------------------
My skill level is limited; I've been at this most of the day without success. I hope an expert can solve it!
------ For reference only ---------------------------------------

Isn't the server list in your zoo.cfg wrong?
server.1=node01.sdass.com:2888:3888
server.2=node02.sdass.com:2888:3888
server.3=node03.sdass.com:2888:3888

hbase.zookeeper.quorum
node01, node02, node03

These two are not the same ~~
------ For reference only ---------------------------------------
I changed it, but it still doesn't work.
------ For reference only ---------------------------------------
The configuration is synchronized.
myid is configured:
node01 is 1
node02 is 2
node03 is 3
======================== /etc/hosts ========================
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.11 node01.sdass.com
192.168.1.12 node02.sdass.com
192.168.1.13 node03.sdass.com
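One detail worth noting here: the hbase.zookeeper.quorum posted earlier uses the short names node01, node02, node03, but this /etc/hosts maps only the fully qualified names. If DNS does not resolve the short names, one possible fix (a guess, not confirmed in the thread; addresses copied from above) is to add them as aliases:

```
192.168.1.11 node01.sdass.com node01
192.168.1.12 node02.sdass.com node02
192.168.1.13 node03.sdass.com node03
```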
------ For reference only ---------------------------------------
I tried again as you said; it still doesn't work.
------ For reference only ---------------------------------------
What exactly is still wrong? Is the error the same as the original one?
Please paste the TASK LOG for that failing TASKID from the node.

Reference: Task Id: attempt_201305031451_0029_m_000058_2, Status
------ For reference only ---------------------------------------
I just ran importtsv again, and it reported:
===================================== client error =====================================
13/05/08 17:26:59 INFO mapred.JobClient: Task Id: attempt_201305081155_0005_m_000014_2, Status: FAILED
java.lang.RuntimeException: java.io.IOException: org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@3bfa681c closed
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:200)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at org.apache.hadoop.mapred.Task.initialize(Task.java:513)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:353)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1122)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.io.IOException: org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@3bfa681c closed
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:795)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:783)
    at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:247)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:211)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:169)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:195)
    ... 9 more
------ For reference only ---------------------------------------
The error log for the corresponding taskID is as follows:
================================ attempt_201305081155_0005_m_000014_1 ================================
2013-05-08 17:23:55,517 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2013-05-08 17:23:57,750 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.4-1, built on 07/27/2012 12:15 GMT
2013-05-08 17:23:57,750 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=node05.sdass.com
2013-05-08 17:23:57,750 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_26
2013-05-08 17:23:57,751 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
2013-05-08 17:23:57,751 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/usr/jdk64/jdk1.6.0_26/jre
2013-05-08 17:23:57,751 INFO org.apache.zookeeper.ZooKeeper: Client
.../mnt/disk01/hadoop/mapred/taskTracker/root/distcache/-9063195659308300668_-861378730_57937419/node01.sdass.com/user/root/.staging/job_201305081155_0005/libjars/guava-r09.jar:/mnt/disk02/hadoop/mapred/taskTracker/root/jobcache/job_201305081155_0005/attempt_201305081155_0005_m_000070_0/work
2013-05-08 17:23:57,751 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/hadoop/libexec/../lib/native/Linux-amd64-64:/mnt/disk02/hadoop/mapred/taskTracker/root/jobcache/job_201305081155_0005/attempt_201305081155_0005_m_000070_0/work
2013-05-08 17:23:57,751 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/mnt/disk12/hadoop/mapred/taskTracker/root/jobcache/job_201305081155_0005/attempt_201305081155_0005_m_000070_0/work/tmp
2013-05-08 17:23:57,751 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2013-05-08 17:23:57,751 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2013-05-08 17:23:57,751 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
2013-05-08 17:23:57,751 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=2.6.32-131.0.15.el6.x86_64
2013-05-08 17:23:57,751 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=root
2013-05-08 17:23:57,751 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/root
2013-05-08 17:23:57,751 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/mnt/disk12/hadoop/mapred/taskTracker/root/jobcache/job_201305081155_0005/attempt_201305081155_0005_m_000070_0/work
2013-05-08 17:23:57,761 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=hconnection
2013-05-08 17:23:57,818 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
2013-05-08 17:23:57,819 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 25017@node05.sdass.com
2013-05-08 17:23:57,853 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate login configuration occurred when trying to find JAAS configuration.
2013-05-08 17:23:57,853 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2013-05-08 17:23:57,855 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)
2013-05-08 17:23:57,972 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:2181
2013-05-08 17:23:57,973 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate login configuration occurred when trying to find JAAS configuration.
2013-05-08 17:23:57,973 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2013-05-08 17:23:57,974 WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
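The connectString=localhost:2181 line in the log above is the telltale detail: the map task's configuration contains no hbase.zookeeper.quorum, so the HBase client falls back to localhost. A hedged workaround, assuming a typical install layout (the conf path below is hypothetical), is to put hbase-site.xml on the job classpath or pass the quorum explicitly on the importtsv command line:

```shell
# Option 1: make hbase-site.xml visible to the MapReduce job
# (/etc/hbase/conf is a hypothetical conf dir; adjust to your install).
export HADOOP_CLASSPATH="/etc/hbase/conf:${HADOOP_CLASSPATH:-}"

# Option 2: override the quorum as a job property; the hostnames are
# the ones used elsewhere in this thread.
QUORUM_OPT="-Dhbase.zookeeper.quorum=node01.sdass.com,node02.sdass.com,node03.sdass.com"
echo "$QUORUM_OPT"
```

Either way, every TaskTracker node needs the same view of the quorum, or individual map attempts will keep dialing 127.0.0.1:2181 and failing as shown above.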
