Usage of the org.datavec.api.writable.Writable.toDouble() method, with code examples

Reposted by x33g5p2x on 2022-02-03, filed under Other

This article collects code examples of the Java method org.datavec.api.writable.Writable.toDouble() and shows how it is used in practice. The examples were extracted from selected projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. The details of Writable.toDouble() are as follows:

Package: org.datavec.api.writable
Class: Writable
Method: toDouble

About Writable.toDouble

The upstream Javadoc provides no description. In practice, toDouble() converts the value held by a Writable (for example a DoubleWritable, IntWritable, or Text) to a Java double.
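Since the contract is small, a minimal stand-in makes it easy to see. The sketch below is hypothetical: SimpleWritable, SimpleDoubleWritable, and SimpleText are illustrative names, not DataVec classes. It only mirrors the idea that toDouble() gives uniform numeric access regardless of the backing storage type.

```java
// Hypothetical, simplified stand-in for the DataVec Writable contract.
interface SimpleWritable {
    double toDouble();
}

// Analogue of a double-backed writable: returns the wrapped primitive as-is.
class SimpleDoubleWritable implements SimpleWritable {
    private final double value;
    SimpleDoubleWritable(double value) { this.value = value; }
    public double toDouble() { return value; }
}

// Analogue of a text-backed writable: parses its string content on demand.
class SimpleText implements SimpleWritable {
    private final String value;
    SimpleText(String value) { this.value = value; }
    public double toDouble() { return Double.parseDouble(value); }
}

public class WritableToDoubleDemo {
    // Sums a mixed list of writables; toDouble() hides the storage type.
    public static double sum(SimpleWritable... writables) {
        double total = 0.0;
        for (SimpleWritable w : writables) {
            total += w.toDouble();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new SimpleDoubleWritable(1.5), new SimpleText("2.5"))); // 4.0
    }
}
```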

Code examples

Code example source: origin: org.datavec/datavec-spark_2.11 (the org.datavec/datavec-spark artifact contains identical code)

@Override
  public double call(Writable writable) throws Exception {
    return writable.toDouble();
  }
}

Code example source: origin: org.datavec/datavec-spark_2.11 (the org.datavec/datavec-spark artifact contains identical code)

@Override
public HistogramCounter add(Writable w) {
  double d = w.toDouble();
  //Not super efficient, but linear search on 20-50 items should be good enough
  int idx = -1;
  for (int i = 0; i < nBins; i++) {
    if (d >= bins[i] && d < bins[i + 1]) {
      idx = i;
      break;
    }
  }
  if (idx == -1)
    idx = nBins - 1;
  binCounts[idx]++;
  return this;
}
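The bin lookup above can be isolated into a small self-contained sketch (HistogramSketch and findBin are hypothetical names, not DataVec API). Here bins holds nBins + 1 edges defining half-open intervals [bins[i], bins[i+1]); the final return mirrors the idx == -1 fallback in the original, which clamps out-of-range values into the last bin.

```java
public class HistogramSketch {
    // Returns the index of the half-open bin [bins[i], bins[i+1]) containing d.
    // Linear search, as in the snippet above: fine for a few dozen bins.
    public static int findBin(double d, double[] bins) {
        int nBins = bins.length - 1;
        for (int i = 0; i < nBins; i++) {
            if (d >= bins[i] && d < bins[i + 1]) {
                return i;
            }
        }
        return nBins - 1; // fallback: clamp values outside all bins into the last bin
    }

    public static void main(String[] args) {
        double[] edges = {0.0, 1.0, 2.0, 3.0}; // three bins
        System.out.println(findBin(1.5, edges)); // 1
    }
}
```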

Code example source: origin: org.datavec/datavec-spark_2.11 (the org.datavec/datavec-spark artifact contains identical code)

switch (schema.getColumnTypes().get(i)) {
  case Double:
    values[i + 2] = step.get(i).toDouble();
    break;
  case Integer:

Code example source: origin: org.datavec/datavec-spark_2.11 (the org.datavec/datavec-spark artifact contains identical code)

switch (schema.getColumnTypes().get(i)) {
  case Double:
    values[i] = v1.get(i).toDouble();
    break;
  case Integer:

Code example source: origin: org.datavec/datavec-hadoop

break;
case Double:
  newWritable = new DoubleWritable(writable.toDouble());
  break;
case Float:
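The hadoop fragment above normalises a column of any type by routing through toDouble() and wrapping the result in a DoubleWritable. Modelling a writable as a java.util.function.DoubleSupplier (a deliberate simplification, not the DataVec type) shows the essence: the conversion discards the original representation and keeps only the double value.

```java
import java.util.function.DoubleSupplier;

public class DoubleConversionSketch {
    // Stand-in for `new DoubleWritable(writable.toDouble())`: whatever the
    // source type, only the double value survives the conversion.
    public static double convert(DoubleSupplier writable) {
        return writable.getAsDouble();
    }

    public static void main(String[] args) {
        DoubleSupplier intBacked = () -> 42;                           // like an int-backed writable
        DoubleSupplier textBacked = () -> Double.parseDouble("3.14");  // like a text-backed writable
        System.out.println(convert(intBacked));  // 42.0
        System.out.println(convert(textBacked)); // 3.14
    }
}
```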

Code example source: origin: org.datavec/datavec-spark_2.11 (the org.datavec/datavec-spark artifact contains identical code)

@Override
public DoubleAnalysisCounter add(Writable writable) {
  double value = writable.toDouble();
  if (value == 0)
    countZero++;
  if (Double.isNaN(value)) //note: (value == Double.NaN) is always false, since NaN != NaN
    countNaN++;
  if (value == getMinValueSeen())
    countMinValue++;
  else if (value < getMinValueSeen()) {
    countMinValue = 1;
  }
  if (value == getMaxValueSeen())
    countMaxValue++;
  else if (value > getMaxValueSeen()) {
    countMaxValue = 1;
  }
  if (value >= 0) {
    countPositive++;
  } else {
    countNegative++;
  }
  counter.merge(value);
  return this;
}
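One subtlety in this counter: a naive `value == Double.NaN` comparison never fires, because IEEE 754 defines NaN as unequal to everything, including itself; `Double.isNaN(value)` is the correct test. The self-contained sketch below (NaNCheckSketch is a hypothetical name) demonstrates counting NaN entries correctly.

```java
public class NaNCheckSketch {
    // Counts NaN entries in a column of doubles.
    // Note: (v == Double.NaN) is always false; Double.isNaN(v) must be used.
    public static int countNaN(double[] values) {
        int count = 0;
        for (double v : values) {
            if (Double.isNaN(v)) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        double[] data = {1.0, Double.NaN, 0.0 / 0.0}; // 0.0/0.0 is also NaN
        System.out.println(countNaN(data)); // 2
    }
}
```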

Code example source: origin: org.deeplearning4j/deeplearning4j-datavec-iterators (the org.deeplearning4j/deeplearning4j-core artifact contains identical code)

k += toPut.length();
} else {
  arr.putScalar(i, k, w.toDouble());
  k++;
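The iterator fragments above convert each writable with toDouble() and write it into an NDArray slot via putScalar; the `k += toPut.length()` branch presumably handles array-valued writables that occupy several slots. The sketch below (FlattenSketch is a hypothetical name) mirrors just the scalar path with plain Java arrays, modelling writables as Doubles for self-containment.

```java
import java.util.List;

public class FlattenSketch {
    // Writes each record value into a flat array at position k, the stand-in
    // for arr.putScalar(i, k, w.toDouble()) in the iterator snippet.
    public static double[] flatten(List<Double> record) {
        double[] arr = new double[record.size()];
        int k = 0;
        for (Double w : record) {
            arr[k] = w; // scalar path only; sub-arrays would advance k by their length
            k++;
        }
        return arr;
    }
}
```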

Code example source: origin: org.deeplearning4j/deeplearning4j-datavec-iterators (the org.deeplearning4j/deeplearning4j-core artifact contains identical code)

j += row.length();
} else {
  arr.putScalar(i, j, k, w.toDouble());
  j++;
  arr.putScalar(i, l++, k, w.toDouble());

Code example source: origin: rahul-raj/Deeplearning4J

List<Writable> record = transformProcessRecordReader.next();
for (int i = 1; i <= 4991; i++) {
   max[i] = Math.max(max[i], record.get(i).toDouble());
   min[i] = Math.min(min[i], record.get(i).toDouble());
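This fragment tracks per-column running minima and maxima across records, the raw ingredients of min-max normalization. The self-contained sketch below (MinMaxSketch is a hypothetical name) isolates that update step, taking records as plain double arrays and initialising with infinities so the first record always wins.

```java
public class MinMaxSketch {
    // Updates per-column running min/max in place, as the loop above does
    // for each record after toDouble() conversion.
    public static void update(double[] min, double[] max, double[] record) {
        for (int i = 0; i < record.length; i++) {
            max[i] = Math.max(max[i], record[i]);
            min[i] = Math.min(min[i], record[i]);
        }
    }

    public static void main(String[] args) {
        // Initialise so that any real value replaces the starting bound.
        double[] min = {Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY};
        double[] max = {Double.NEGATIVE_INFINITY, Double.NEGATIVE_INFINITY};
        update(min, max, new double[]{3.0, -1.0});
        update(min, max, new double[]{1.0, 5.0});
        System.out.println(min[0] + " " + max[1]); // 1.0 5.0
    }
}
```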

Code example source: origin: org.datavec/datavec-spark_2.11 (the org.datavec/datavec-spark artifact contains identical code)

idx += subLength;
} else {
  arr.putScalar(idx++, w.toDouble());
