Usage of gov.sandia.cognition.math.matrix.Vector.times(), with code examples

x33g5p2x · reposted 2022-02-01

This article collects code examples of the Java method gov.sandia.cognition.math.matrix.Vector.times() and shows how it is used in practice. The examples are drawn from selected open-source projects on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of Vector.times():
Package: gov.sandia.cognition.math.matrix
Class: Vector
Method: times

About Vector.times

Premultiplies the matrix by the vector "this": treating `this` as a row vector v, `v.times(M)` returns the vector–matrix product v·M (equivalently, Mᵀv).
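As a rough illustration of what this premultiplication computes (plain Java arrays, not the Foundry API; the names and values here are made up), the j-th entry of the result is Σᵢ vᵢ·Mᵢⱼ:

```java
import java.util.Arrays;

public class PremultiplyDemo {
    // Plain-array sketch of row-vector-times-matrix: v^T * M.
    static double[] premultiply(double[] v, double[][] m) {
        int rows = m.length, cols = m[0].length;
        double[] result = new double[cols];
        for (int j = 0; j < cols; j++) {
            for (int i = 0; i < rows; i++) {
                result[j] += v[i] * m[i][j];
            }
        }
        return result;
    }

    public static void main(String[] args) {
        double[] v = {1.0, 2.0};
        double[][] m = {{1.0, 0.0, 2.0},
                        {0.0, 1.0, 3.0}};
        // {1*1+2*0, 1*0+2*1, 1*2+2*3} = {1.0, 2.0, 8.0}
        System.out.println(Arrays.toString(premultiply(v, m)));
    }
}
```

Note that a 2-vector times a 2×3 matrix yields a 3-vector: the result length matches the matrix's column count, not the input's.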

Code examples

Example source: algorithmfoundry/Foundry

/**
 * Return A^(T) * input.
 *
 * @param input The vector to multiply by the transpose of A
 * @return A^(T) * input
 */
public Vector transposeMult(Vector input)
{
  // NOTE: This computes A^(T)x by the following:
  // return (x^(T)A)^(T)
  // But as we don't have to transpose vectors in this code, it requires
  // no real transposes at all.
  return input.times(m);
}
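To see why `input.times(m)` yields Aᵀ·input without materializing a transpose, here is a plain-array check, independent of the Foundry API and using made-up values: it compares an explicitly built transpose against the row-vector trick from the snippet.

```java
import java.util.Arrays;

public class TransposeTrickDemo {
    // Explicitly build A^T, then multiply it by x the ordinary way.
    static double[] multiplyByTranspose(double[][] a, double[] x) {
        int rows = a.length, cols = a[0].length;
        double[][] at = new double[cols][rows];   // materialize A^T
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                at[j][i] = a[i][j];
        double[] result = new double[cols];
        for (int j = 0; j < cols; j++)
            for (int i = 0; i < rows; i++)
                result[j] += at[j][i] * x[i];
        return result;
    }

    // The snippet's trick: compute x^T A directly; no transpose is ever built.
    static double[] premultiply(double[] x, double[][] a) {
        int rows = a.length, cols = a[0].length;
        double[] result = new double[cols];
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                result[j] += x[i] * a[i][j];
        return result;
    }

    public static void main(String[] args) {
        double[][] a = {{1.0, 2.0}, {3.0, 4.0}, {5.0, 6.0}};
        double[] x = {1.0, 0.5, 2.0};
        // Both paths produce the same vector.
        System.out.println(Arrays.equals(
            multiplyByTranspose(a, x), premultiply(x, a)));
    }
}
```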

Example source: algorithmfoundry/Foundry

public Vector evaluate(
  final Vectorizable input)
{
  // Apply the transform to the input vector.
  return input.convertToVector().times(this.transform);
}

Example source: algorithmfoundry/Foundry

/**
 * Computes the scale component for the inverse-gamma distribution
 * @return
 * Scale component for the inverse-gamma distribution
 */
public double getScale()
{
  Vector mean = this.getMean();
  Matrix Ci = this.covarianceInverse;
  return 0.5 * (this.outputSumSquared - mean.times(Ci).dotProduct(mean));
}
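The pattern `mean.times(Ci).dotProduct(mean)` evaluates the quadratic form meanᵀ·Ci·mean. A plain-array sketch with hypothetical numbers (not the Foundry API):

```java
public class QuadraticFormDemo {
    // v^T M v: the scalar computed by v.times(M).dotProduct(v).
    static double quadraticForm(double[] v, double[][] m) {
        double sum = 0.0;
        for (int i = 0; i < v.length; i++)
            for (int j = 0; j < v.length; j++)
                sum += v[i] * m[i][j] * v[j];
        return sum;
    }

    public static void main(String[] args) {
        double[] mean = {1.0, 2.0};
        double[][] ci = {{2.0, 0.0}, {0.0, 0.5}};  // made-up diagonal precision
        // 1*2*1 + 2*0.5*2 = 4.0
        System.out.println(quadraticForm(mean, ci));
    }
}
```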

Example source: algorithmfoundry/Foundry

/**
 * Normalizes the given vector by subtracting the mean and multiplying by
 * the inverse square root of the covariance (the multivariate analogue of
 * dividing by the standard deviation).
 *
 * @param  value The vector to normalize.
 * @return The normalized vector.
 */
public Vector evaluate(
  final Vectorizable value)
{
  final Vector input = value.convertToVector();
  return input.minus(this.getMean()).times(
    this.getCovarianceInverseSquareRoot());
}
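A sketch of this whitening step with plain arrays and a hypothetical diagonal covariance (not the Foundry API): subtracting the mean and premultiplying by C^(-1/2) is the multivariate version of (x − μ)/σ.

```java
import java.util.Arrays;

public class WhitenDemo {
    // (x - mean) * covInvSqrt, treating the centered vector as a row vector.
    static double[] whiten(double[] x, double[] mean, double[][] covInvSqrt) {
        int n = x.length;
        double[] result = new double[covInvSqrt[0].length];
        for (int j = 0; j < result.length; j++)
            for (int i = 0; i < n; i++)
                result[j] += (x[i] - mean[i]) * covInvSqrt[i][j];
        return result;
    }

    public static void main(String[] args) {
        double[] x = {3.0, 5.0};
        double[] mu = {1.0, 1.0};
        // Covariance diag(4, 16)  =>  C^(-1/2) = diag(1/2, 1/4).
        double[][] cInvSqrt = {{0.5, 0.0}, {0.0, 0.25}};
        // {(3-1)*0.5, (5-1)*0.25} = {1.0, 1.0}
        System.out.println(Arrays.toString(whiten(x, mu, cInvSqrt)));
    }
}
```

With a diagonal covariance each coordinate is just centered and divided by its own standard deviation; the matrix form extends this to correlated coordinates.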

Example source: gov.sandia.foundry/gov-sandia-cognition-learning-core

public double logEvaluate(
  Vector input)
{
  final int dim = this.getInputDimensionality();
  final double logDet = this.getLogDeterminantPrecision();
  final Vector delta = input.minus(this.mean);
  final double z2 = delta.times(this.getPrecision()).dotProduct(delta);
  final double d2pv2 = dim/2.0+this.degreesOfFreedom/2.0;
  double logSum = 0.0;
  logSum += MathUtil.logGammaFunction( d2pv2 );
  logSum -= MathUtil.logGammaFunction( this.degreesOfFreedom/2.0 );
  logSum += 0.5 * logDet;
  logSum -= (dim/2.0)*Math.log(Math.PI*this.degreesOfFreedom);
  logSum -= d2pv2*Math.log( 1.0 + z2/this.degreesOfFreedom );
  return logSum;
}

Example source: gov.sandia.foundry/gov-sandia-cognition-learning-core

@Override
public UnivariateGaussian.PDF evaluate(
  Vectorizable input)
{
  // Bishop's equations 3.58-3.59
  Vector x = input.convertToVector();
  double mean = x.dotProduct( this.posterior.getMean() );
  double variance = x.times( this.posterior.getCovariance() ).dotProduct(x) + outputVariance;
  return new UnivariateGaussian.PDF( mean, variance );
}
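A plain-array sketch of those two lines (Bishop eqs. 3.58–3.59) with made-up posterior values, independent of the Foundry API: the predictive mean is x·m and the predictive variance is xᵀSx plus the output noise variance.

```java
public class PredictiveDemo {
    // Returns {mean, variance} of the predictive Gaussian.
    static double[] predictive(double[] x, double[] postMean,
                               double[][] postCov, double outputVariance) {
        double mean = 0.0;
        for (int i = 0; i < x.length; i++)
            mean += x[i] * postMean[i];                   // x . m
        double variance = outputVariance;
        for (int i = 0; i < x.length; i++)
            for (int j = 0; j < x.length; j++)
                variance += x[i] * postCov[i][j] * x[j];  // x^T S x
        return new double[]{mean, variance};
    }

    public static void main(String[] args) {
        double[] x = {1.0, 2.0};
        double[] m = {0.5, 0.25};
        double[][] s = {{0.1, 0.0}, {0.0, 0.1}};  // hypothetical posterior covariance
        double[] mv = predictive(x, m, s, 0.2);   // ≈ {1.0, 0.7}
        System.out.println(mv[0] + " " + mv[1]);
    }
}
```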

Example source: algorithmfoundry/Foundry

@Override
public StudentTDistribution evaluate(
  Vectorizable input)
{
  Vector x = input.convertToVector();
  double mean = x.dotProduct( this.posterior.getMean() );
  double dofs = this.posterior.getInverseGamma().getShape() * 2.0;
  double v = x.times( this.posterior.getGaussian().getCovariance() ).dotProduct(x);
  double anbn = this.posterior.getInverseGamma().getShape() / this.posterior.getInverseGamma().getScale();
  double precision = anbn / (1.0 + v);
  return new StudentTDistribution( dofs, mean, precision );
}
