Analysis of HBase regionservers crashing one by one
I recently ran into a rather strange problem: a regionserver's ZooKeeper connection timed out because of a long GC pause, and it was eventually kicked out of the cluster. But what happened next was where the nightmare really began.
1. A regionserver's ZooKeeper session timed out due to a long GC pause, and it was eventually evicted from the cluster.
~~~Hbase regionserver log~~~
2018-05-31 11:42:17,739 INFO [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for cn_kong_groups,\x00\x00\xBB\xE9\x03\x03\x00D\xDF,1527701650816.a177e358544ffe3157a4c0531feb8e5a., current region memstore size 123.40 MB, and 1/1 column families' memstores are being flushed.
2018-05-31 11:42:17,740 WARN [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 52612ms GC pool 'ParNew' had collection(s): count=1 time=45897ms GC pool 'ConcurrentMarkSweep' had collection(s): count=2 time=6814ms
2018-05-31 11:42:17,741 WARN [B.defaultRpcServer.handler=0,queue=0,port=16020] ipc.RpcServer: (responseTooSlow): {"processingtimems":52721,"call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)","client":"10.21.23.232:55676","starttimems":1527738085020,"queuetimems":0,"class":"HRegionServer","responsesize":15,"method":"Scan"}
2018-05-31 11:42:17,745 INFO [regionserver/regionserver1.bigdata.com/172.16.11.66:16020-SendThread(ip-10-21-14-154.bigdata.com:2181)] zookeeper.ClientCnxn: Client session timed out, have not heard from server in 61597ms for sessionid 0x15f3454790276db, closing socket connection and attempting reconnect
2018-05-31 11:42:17,745 INFO [main-SendThread(ip-10-21-14-154.bigdata.com:2181)] zookeeper.ClientCnxn: Client session timed out, have not heard from server in 61595ms for sessionid 0x15f3454790276da, closing socket connection and attempting reconnect
2018-05-31 11:42:17,893 INFO [sync.3] wal.FSHLog: Slow sync cost: 148 ms, current pipeline: [DatanodeInfoWithStorage[172.16.11.66:50010,DS-ac448b6f-2964-4900-aeda-4547f2d956b8,DISK], DatanodeInfoWithStorage[172.16.11.67:50010,DS-e8b1727a-81d5-4b65-9854-6b6ad6749b64,DISK], DatanodeInfoWithStorage[10.21.23.41:50010,DS-56eb0e28-5b3c-4047-acc3-d79f3f6bacc2,DISK]]
2018-05-31 11:42:17,893 INFO [sync.2] wal.FSHLog: Slow sync cost: 150 ms, current pipeline: [DatanodeInfoWithStorage[172.16.11.66:50010,DS-ac448b6f-2964-4900-aeda-4547f2d956b8,DISK], DatanodeInfoWithStorage[172.16.11.67:50010,DS-e8b1727a-81d5-4b65-9854-6b6ad6749b64,DISK], DatanodeInfoWithStorage[10.21.23.41:50010,DS-56eb0e28-5b3c-4047-acc3-d79f3f6bacc2,DISK]]
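For context on the eviction itself: the JvmPauseMonitor reports a pause of roughly 52 seconds, after which the ZooKeeper client has not heard from the server for more than 61 seconds, past the session timeout, so the session expires and the regionserver is kicked out of the cluster. The setting involved is sketched below; zookeeper.session.timeout is a standard HBase property, but the value shown is only an illustrative example, not this cluster's actual configuration.

~~~
<!-- hbase-site.xml (illustrative value only) -->
<property>
  <name>zookeeper.session.timeout</name>
  <!-- If a GC pause outlasts this timeout, ZooKeeper expires the regionserver's
       session and the regionserver aborts. Raising it tolerates longer pauses
       at the cost of slower detection of genuinely dead servers. -->
  <value>90000</value>
</property>
~~~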
2. Setting the Full GC issue aside for now, something strange happened: shortly after this regionserver was restarted it crashed again, and then other regionservers began to crash one after another.
2018-05-31 12:20:12,919 ERROR [RS_OPEN_REGION-regionserver1:16020-2] coprocessor.CoprocessorHost: The coprocessor org.apache.dfs.storage.hbase.cube.v1.coprocessor.observer.AggregateRegionObserver threw org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1774)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1313)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3856)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1006)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
2018-05-31 12:20:12,923 ERROR [RS_OPEN_REGION-regionserver1:16020-2] regionserver.HRegionServer: ABORTING region server regionserver1.bigdata.com,16020,1527740222472: The coprocessor org.apache.dfs.storage.hbase.cube.v1.coprocessor.observer.AggregateRegionObserver threw org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1774)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1313)
3. Below is the log from another regionserver. It shows the same "Operation category READ is not supported in state standby" error, followed by "ABORTING region server".
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1774)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1313)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3856)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1006)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
2018-05-31 12:20:13,678 ERROR [RS_OPEN_REGION-regionserver2:16020-2] regionserver.HRegionServer: ABORTING region server regionserver2.bigdata.com,16020,1519906845598: The coprocessor
4. Why was this happening? It seemed very strange.
After some digging, and with a pointer from an experienced colleague, we found the root cause. Some HBase tables use the coprocessor feature; that in itself is not the problem.
One table's descriptor is shown below; note this part: hdfs://dfs/dfs/dfs_metadata/coprocessor/***** (here the jar path goes through the HDFS nameservice).
'dfs_ZZQSWFZ4VW', {TABLE_ATTRIBUTES => {
  coprocessor$1 => 'hdfs://dfs/dfs/dfs_metadata/coprocessor/dfs-coprocessor-1.5.4.1-0.jar|org.apache.dfs.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService|1001|',
  coprocessor$2 => 'hdfs://dfs/dfs/dfs_metadata/coprocessor/dfs-coprocessor-1.5.4.1-0.jar|org.apache.dfs.storage.hbase.cube.v1.coprocessor.observer.AggregateRegionObserver|1002|',
  METADATA => {'CREATION_TIME' => '1478338090339', 'GIT_COMMIT' => 'c4e31c1b3a664f598352061ae8703812e9d9cef7;', 'dfs_HOST' => 'dfs_metadata', 'OWNER' => 'xxxx.owner@bigdata.com', 'SEGMENT' => 'WindGreenwichOffline_dev[20161105070130_20161105092132]', 'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy', 'USER' => 'ADMIN'}},
{NAME => 'F1', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 'NONE', COMPRESSION => 'SNAPPY'}
5. Here is the key point: the HDFS NameNode now runs in HA mode, but this cluster originally had a single, non-HA NameNode. Some of the coprocessor-enabled HBase tables were created back then, and they are the trigger of the problem.
Those coprocessors reference the jar by an HDFS URI that names a specific NameNode host, so they can only be loaded while that original NameNode happens to be active. Once an active-standby failover occurs, these tables can no longer be loaded or accessed, and that ultimately takes down the whole regionserver.
Such a table's descriptor is shown below; note this part: hdfs://old_namenode_host.bigdata.com:9000/dfs/dfs_metadata/***** — the URI uses a concrete hostname (the old NameNode) instead of the HA nameservice.
'dfs_1KT8V5FL1C', {TABLE_ATTRIBUTES => {
  coprocessor$1 => '|org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint|107374663|',
  coprocessor$2 => 'hdfs://old_namenode_host.bigdata.com:9000/dfs/dfs_metadata/coprocessor/dfs-coprocessor-1.5.4.1-1.jar|org.apache.dfs.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService|1001|',
  coprocessor$3 => 'hdfs://old_namenode_host.bigdata.com:9000/dfs/dfs_metadata/coprocessor/dfs-coprocessor-1.5.4.1-1.jar|org.apache.dfs.storage.hbase.cube.v1.coprocessor.observer.AggregateRegionObserver|1002|',
  METADATA => {'CREATION_TIME' => '1493632152602', 'GIT_COMMIT' => 'c4e31c1b3a664f598352061ae8703812e9d9cef7;', 'dfs_HOST' => 'dfs_metadata', 'OWNER' => 'xxxx.owner@bigdata.com', 'SEGMENT' => 'WindCGN2_clone[20160501000000_20170501000000]', 'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy', 'USER' => 'ADMIN'}},
{NAME => 'F1', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 'NONE', COMPRESSION => 'SNAPPY'}
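One way to repair such a table is to re-point the coprocessor attributes at the HA nameservice instead of the old hostname. The HBase shell sketch below is only an illustration under a few assumptions: the table name, classes and priorities are taken from the descriptor above, the jar is assumed to also be reachable under the nameservice path hdfs://dfs/dfs/dfs_metadata/coprocessor/, and depending on the HBase version the table may need to be disabled before the alter.

~~~
# Sketch only: verify table name, jar path, class and priority for your environment.
disable 'dfs_1KT8V5FL1C'

# Remove the coprocessor entries that hard-code the old NameNode host.
alter 'dfs_1KT8V5FL1C', METHOD => 'table_att_unset', NAME => 'coprocessor$2'
alter 'dfs_1KT8V5FL1C', METHOD => 'table_att_unset', NAME => 'coprocessor$3'

# Re-add them through the HA nameservice so a NameNode failover no longer matters.
# (Each alter appends the spec under the next free coprocessor$N slot.)
alter 'dfs_1KT8V5FL1C', 'coprocessor' => 'hdfs://dfs/dfs/dfs_metadata/coprocessor/dfs-coprocessor-1.5.4.1-1.jar|org.apache.dfs.storage.hbase.cube.v2.coprocessor.endpoint.CubeVisitService|1001|'
alter 'dfs_1KT8V5FL1C', 'coprocessor' => 'hdfs://dfs/dfs/dfs_metadata/coprocessor/dfs-coprocessor-1.5.4.1-1.jar|org.apache.dfs.storage.hbase.cube.v1.coprocessor.observer.AggregateRegionObserver|1002|'

enable 'dfs_1KT8V5FL1C'
~~~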
To wrap up: why did the other regionservers crash one after another? And why did the NameNode failover, which had happened long before, only trigger this chain reaction after GC brought a regionserver down?
1. GC killed the first regionserver. The master then reassigned its regions to other regionservers, but because the coprocessor jar cannot be read through the standby NameNode, those regionservers could not open the affected regions either and aborted with the same error, which pushed the regions on to yet another regionserver, and so on. In theory, every regionserver in the cluster would eventually die.
2. The regionservers had been started, and had these regions open and working, before the NameNode active-standby failover, so nothing broke at that point. But once a regionserver restarts after the failover and has to reopen such a region, the load is guaranteed to fail.
For more on the coprocessor feature, see:
http://www.zhyea.com/2017/04/13/using-hbase-coprocessor.html
https://www.3pillarglobal.com/insights/hbase-coprocessors