This article collects code examples of the Java method org.apache.hadoop.ipc.Server.getRemoteIp(), showing how Server.getRemoteIp() is used in practice. The examples are extracted from selected open-source projects found on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of the Server.getRemoteIp() method:
Package path: org.apache.hadoop.ipc.Server
Class name: Server
Method name: getRemoteIp
Returns the remote side IP address when invoked inside an RPC. Returns null in case of an error.
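As the examples below show, Server.getRemoteIp() returns null whenever the calling thread is not inside an RPC handler, so every caller must null-check. A minimal standalone sketch of that null-safe pattern (using plain java.net.InetAddress rather than the Hadoop API, so it runs without hadoop-common on the classpath):

```java
import java.net.InetAddress;

public class RemoteIpPattern {
    // Mirrors the null-safe pattern seen in Server.getRemoteAddress():
    // the address may be null when we are not inside an RPC call.
    static String toAddressString(InetAddress addr) {
        return (addr == null) ? null : addr.getHostAddress();
    }

    public static void main(String[] args) {
        // Outside an RPC context: no address available.
        System.out.println(toAddressString(null)); // prints null
        // Inside an RPC context, a concrete address would come back.
        System.out.println(toAddressString(InetAddress.getLoopbackAddress())); // prints 127.0.0.1
    }
}
```

The ternary keeps the null flowing through instead of throwing a NullPointerException, which is exactly why the audit-log helpers below guard with `if (ip != null)`.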
Code example source: origin: org.apache.hadoop/hadoop-common
/** Returns remote address as a string when invoked inside an RPC.
* Returns null in case of an error.
*/
public static String getRemoteAddress() {
InetAddress addr = getRemoteIp();
return (addr == null) ? null : addr.getHostAddress();
}
Code example source: origin: org.apache.hadoop/hadoop-hdfs
private void updateLastPromisedEpoch(long newEpoch) throws IOException {
LOG.info("Updating lastPromisedEpoch from " + lastPromisedEpoch.get() +
" to " + newEpoch + " for client " + Server.getRemoteIp() +
" ; journal id: " + journalId);
lastPromisedEpoch.set(newEpoch);
// Since we have a new writer, reset the IPC serial - it will start
// counting again from 0 for this writer.
currentEpochIpcSerial = -1;
}
Code example source: origin: org.apache.hadoop/hadoop-hdfs
NameNode.blockStateChangeLog.debug(
    "BLOCK NameSystem.addToCorruptReplicasMap: {} added as corrupt on "
        + "{} by {} {}", blk, dn, Server.getRemoteIp(), reasonText);
} else {
  NameNode.blockStateChangeLog.debug(
      "BLOCK NameSystem.addToCorruptReplicasMap: duplicate requested for" +
          " {} to add as corrupt on {} by {} {}", blk, dn,
      Server.getRemoteIp(), reasonText);
}
Code example source: origin: org.apache.hadoop/hadoop-hdfs
private void logAuditEvent(boolean succeeded, String cmd, String src,
String dst, FileStatus stat) throws IOException {
if (isAuditEnabled() && isExternalInvocation()) {
logAuditEvent(succeeded, Server.getRemoteUser(), Server.getRemoteIp(),
cmd, src, dst, stat);
}
}
Code example source: origin: org.apache.hadoop/hadoop-hdfs
synchronized void monitorHealth()
throws HealthCheckFailedException, AccessControlException {
namesystem.checkSuperuserPrivilege();
if (!haEnabled) {
return; // no-op, if HA is not enabled
}
long start = Time.monotonicNow();
getNamesystem().checkAvailableResources();
long end = Time.monotonicNow();
if (end - start >= HEALTH_MONITOR_WARN_THRESHOLD_MS) {
// log a warning if it takes >= 5 seconds.
LOG.warn("Remote IP {} checking available resources took {}ms",
Server.getRemoteIp(), end - start);
}
if (!getNamesystem().nameNodeHasResourcesAvailable()) {
throw new HealthCheckFailedException(
"The NameNode has no resources available");
}
}
Code example source: origin: org.apache.hadoop/hadoop-hdfs
LOG.info("Updating lastWriterEpoch from " + curLastWriterEpoch +
" to " + reqInfo.getEpoch() + " for client " +
Server.getRemoteIp() + " ; journal id: " + journalId);
lastWriterEpoch.set(reqInfo.getEpoch());
Code example source: origin: io.hops/hadoop-common
/** Returns remote address as a string when invoked inside an RPC.
* Returns null in case of an error.
*/
public static String getRemoteAddress() {
InetAddress addr = getRemoteIp();
return (addr == null) ? null : addr.getHostAddress();
}
Code example source: origin: com.github.jiayuhan-it/hadoop-common
/** Returns remote address as a string when invoked inside an RPC.
* Returns null in case of an error.
*/
public static String getRemoteAddress() {
InetAddress addr = getRemoteIp();
return (addr == null) ? null : addr.getHostAddress();
}
Code example source: origin: com.facebook.hadoop/hadoop-core
/** Returns remote address as a string when invoked inside an RPC.
* Returns null in case of an error.
*/
public static String getRemoteAddress() {
InetAddress addr = getRemoteIp();
return (addr == null) ? null : addr.getHostAddress();
}
Code example source: origin: org.apache.hadoop/hadoop-hdfs
"IPC serial %s from client %s was not higher than prior highest " +
"IPC serial %s ; journal id: %s", reqInfo.getIpcSerialNumber(),
Server.getRemoteIp(), currentEpochIpcSerial, journalId);
currentEpochIpcSerial = reqInfo.getIpcSerialNumber();
Code example source: origin: org.apache.hadoop/hadoop-yarn-server-resourcemanager
static String createSuccessLog(String user, String operation, String target,
ApplicationId appId, ApplicationAttemptId attemptId,
ContainerId containerId, Resource resource) {
return createSuccessLog(user, operation, target, appId, attemptId,
containerId, resource, null, Server.getRemoteIp());
}
Code example source: origin: ch.cern.hadoop/hadoop-mapreduce-client-hs
/**
* A helper api to add remote IP address
*/
static void addRemoteIP(StringBuilder b) {
InetAddress ip = Server.getRemoteIp();
// ip address can be null for testcases
if (ip != null) {
add(Keys.IP, ip.getHostAddress(), b);
}
}
Code example source: origin: org.apache.hadoop/hadoop-yarn-server-resourcemanager
public static void logSuccess(String user, String operation, String target,
ApplicationId appId, CallerContext callerContext) {
if (LOG.isInfoEnabled()) {
LOG.info(createSuccessLog(user, operation, target, appId, null, null,
null, callerContext, Server.getRemoteIp()));
}
}
Code example source: origin: org.apache.hadoop/hadoop-yarn-server-resourcemanager
/**
* A helper api to add remote IP address.
*/
static void addRemoteIP(StringBuilder b) {
InetAddress ip = Server.getRemoteIp();
// ip address can be null for testcases
if (ip != null) {
add(Keys.IP, ip.getHostAddress(), b);
}
}
Code example source: origin: ch.cern.hadoop/hadoop-hdfs
private static InetAddress getRemoteIp() {
InetAddress ip = Server.getRemoteIp();
if (ip != null) {
return ip;
}
return NamenodeWebHdfsMethods.getRemoteIp();
}
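The snippet above layers two address sources: the RPC thread-local first, with the WebHDFS context as a fallback. That "first non-null source wins" shape can be sketched generically; the supplier names below are illustrative stand-ins, not Hadoop API:

```java
import java.net.InetAddress;
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

public class FirstNonNullDemo {
    // Generic "first non-null wins" lookup, mirroring trying Server.getRemoteIp()
    // first and then falling back to NamenodeWebHdfsMethods.getRemoteIp().
    static <T> T firstNonNull(List<Supplier<T>> sources) {
        for (Supplier<T> source : sources) {
            T value = source.get();
            if (value != null) {
                return value;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        InetAddress webHdfsIp = InetAddress.getLoopbackAddress();
        // First supplier stands in for Server.getRemoteIp() outside an RPC (null);
        // second stands in for the WebHDFS thread-local fallback.
        List<Supplier<InetAddress>> sources = Arrays.asList(
                () -> null,
                () -> webHdfsIp);
        System.out.println(firstNonNull(sources).getHostAddress()); // prints 127.0.0.1
    }
}
```

Suppliers keep each lookup lazy, so the fallback is only consulted when the primary source yields null.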
Code example source: origin: ch.cern.hadoop/hadoop-mapreduce-client-core
/**
* A helper api to add remote IP address
*/
static void addRemoteIP(StringBuilder b) {
InetAddress ip = Server.getRemoteIp();
// ip address can be null for testcases
if (ip != null) {
add(Keys.IP, ip.getHostAddress(), b);
}
}
Code example source: origin: com.github.jiayuhan-it/hadoop-mapreduce-client-hs
/**
* A helper api to add remote IP address
*/
static void addRemoteIP(StringBuilder b) {
InetAddress ip = Server.getRemoteIp();
// ip address can be null for testcases
if (ip != null) {
add(Keys.IP, ip.getHostAddress(), b);
}
}
Code example source: origin: org.apache.hadoop/hadoop-hdfs
InetAddress dnAddress = Server.getRemoteIp();
if (dnAddress != null) {
Code example source: origin: io.prestosql.hadoop/hadoop-apache
private void updateLastPromisedEpoch(long newEpoch) throws IOException {
LOG.info("Updating lastPromisedEpoch from " + lastPromisedEpoch.get() +
" to " + newEpoch + " for client " + Server.getRemoteIp());
lastPromisedEpoch.set(newEpoch);
// Since we have a new writer, reset the IPC serial - it will start
// counting again from 0 for this writer.
currentEpochIpcSerial = -1;
}
Code example source: origin: org.apache.hadoop/hadoop-yarn-server-resourcemanager
private void testSuccessLogFormatHelper(boolean checkIP, ApplicationId appId,
ApplicationAttemptId attemptId, ContainerId containerId,
CallerContext callerContext, Resource resource) {
testSuccessLogFormatHelper(checkIP, appId, attemptId, containerId,
callerContext, resource, Server.getRemoteIp());
}