Connecting to a secure HBase with a Java client



I am trying to connect to a secure HBase using Kerberos. It is an HBase instance deployed in an HDP 3 cluster. To be precise, I am trying to access it with a Java client from a host outside the cluster.

Here is my code:

System.setProperty("java.security.krb5.conf","/etc/krb5.conf");
System.setProperty("sun.security.krb5.debug", "true");
System.setProperty("java.security.debug", "gssloginconfig,configfile,configparser,logincontext");
System.setProperty("java.security.auth.login.config", "hbase.conf");
Configuration conf = HBaseConfiguration.create();
String principal="user@REALM";
File keytab = new File("/home/user/user.keytab");
UserGroupInformation.setConfiguration(conf);
UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab.getAbsolutePath());
ugi.doAs(new PrivilegedAction<Void>() {
@Override
public Void run() {
try {
TableName tableName = TableName.valueOf("some_table");
final Connection conn = ConnectionFactory.createConnection(conf);
System.out.println(" go ");
Table table = conn.getTable(tableName);
Result r = table.get(new Get(Bytes.toBytes("some_key")));
System.out.println(r);
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
});
}
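To check that the keytab login itself succeeds before any HBase call, the returned UGI can be inspected; a minimal sketch, placed right after the loginUserFromKeytabAndReturnUGI call:

// Sketch: confirm the Kerberos login before touching HBase.
System.out.println("Logged in as: " + ugi.getUserName()
        + ", from keytab: " + ugi.isFromKeytab()
        + ", auth method: " + ugi.getAuthenticationMethod());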

Here is my JAAS config file (the Client section is the login context the ZooKeeper client looks up by default):

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="/home/user/user.keytab"
  principal="user@REALM";
};

All the ZooKeeper and other settings come from the hbase-site.xml file provided by Ambari.
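If that file is not on the client's classpath, it can also be loaded explicitly; a minimal sketch, assuming the Ambari client configs were copied locally to /etc/hbase/conf:

// Sketch: load the cluster's hbase-site.xml explicitly (the path is an assumption).
Configuration conf = HBaseConfiguration.create();
conf.addResource(new org.apache.hadoop.fs.Path("/etc/hbase/conf/hbase-site.xml"));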

I get no error; the client just spins in an infinite loop, repeatedly reading the /hbase-secure/meta-region-server znode from ZooKeeper, with a trace like this:

[ReadOnlyZKClient-node2:2181,node3:2181,node4:2181@0x50ad3bc1-SendThread(node4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x371f62d9b230031, packet:: clientPath:/hbase-secure/meta-region-server serverPath:/hbase-secure/meta-region-server finished:false header:: 141,4 replyHeader:: 141,365072222881,0 request:: '/hbase-secure/meta-region-server,F response:: #ffffffff000146d6173746572... (binary meta-region-server payload)
(the same DEBUG read of /hbase-secure/meta-region-server repeats endlessly, with the header id incrementing: 142, 143, ...)

EDIT

OK, I did get this error in the end; I just had not waited long enough:

Exception in thread "main" java.net.SocketTimeoutException: callTimeout=1200000, callDuration=2350283: Failed after attempts=36, exceptions:
Mon May 11 13:53:42 CEST 2020, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=70631: Call to slave-5.cluster/172.10.96.43:16020 failed on local exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] row 'some_table,some_key,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=slave-5.cluster/172.10.96.43:16020,16020,1588595144765, seqNum=-1
row 'row_key' on table 'some_table' at null
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:159)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:386)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:360)
at internal.holly.devoptools.hbase.HBaseCli.main(HBaseCli.java:77)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Mon May 11 13:53:42 CEST 2020, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=70631: Call to slave-5.cluster/172.10.96.43:16020 failed on local exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] row 'some_table,some_key,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=slave-5.cluster,16020,1588595144765, seqNum=-1
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:298)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:242)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437)
at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:856)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:759)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:745)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:716)
at org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:594)
at org.apache.hadoop.hbase.client.HRegionLocator.getRegionLocation(HRegionLocator.java:72)
at org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:223)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
... 3 more

Thanks.

In the end, adding this property made it work:

conf.set("hadoop.security.authentication", "kerberos");

Here is my final code:

import java.io.IOException;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.security.UserGroupInformation;

public static void main(String[] args) throws IOException, InterruptedException {
    System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
    Configuration conf = HBaseConfiguration.create();
    // The missing piece: tell the Hadoop security layer to use Kerberos.
    conf.set("hadoop.security.authentication", "kerberos");
    String principal = "user@REALM";
    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, "/home/user/principal.keytab");
    // Create the connection as the logged-in user.
    Connection conn = ugi.doAs(new PrivilegedExceptionAction<Connection>() {
        @Override
        public Connection run() throws Exception {
            return ConnectionFactory.createConnection(conf);
        }
    });
    TableName tableName = TableName.valueOf("some_table");
    Table table = conn.getTable(tableName);
    Result r = table.get(new Get(Bytes.toBytes("some_key")));
    System.out.println("result: " + r);
}
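One note for long-running clients: the ticket obtained from the keytab eventually expires, so it may need to be refreshed periodically; a sketch:

// Sketch: refresh the TGT from the keytab (e.g. from a scheduled task);
// this is a no-op while the current ticket is still valid.
ugi.checkTGTAndReloginFromKeytab();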

I ran into the same problem. In my case, when submitting a Spark job I was including hadoop* and hbase* jars in my spark-submit command; after some digging I noticed that those jars did not match the Hadoop/HBase versions running on my YARN cluster. It was only a small version difference in those jars, but it was enough to break the Kerberos authentication.
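A quick sanity check for this kind of mismatch is to print the versions the client actually loaded at runtime; a minimal sketch:

// Print the Hadoop/HBase versions found on the runtime classpath.
System.out.println("Hadoop: " + org.apache.hadoop.util.VersionInfo.getVersion());
System.out.println("HBase: " + org.apache.hadoop.hbase.util.VersionInfo.getVersion());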
