Usage of the gov.sandia.cognition.math.matrix.Vector.outerProduct() method, with code examples


This article collects code examples for the Java method gov.sandia.cognition.math.matrix.Vector.outerProduct(), showing how Vector.outerProduct() is used in practice. The examples are drawn from selected open-source projects on platforms such as GitHub, Stack Overflow, and Maven, and should be useful as references. Details of the method:
Package: gov.sandia.cognition.math.matrix
Class: Vector
Method: outerProduct

Vector.outerProduct overview

Computes the outer matrix product between the two vectors. For an m-dimensional vector u and an n-dimensional vector v, u.outerProduct(v) returns an m x n Matrix whose (i, j) entry is u(i) * v(j).
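
As a quick orientation before the real-world excerpts, here is a minimal sketch of calling the method directly; the vector values are arbitrary and the class name OuterProductExample is invented for this article:

import gov.sandia.cognition.math.matrix.Matrix;
import gov.sandia.cognition.math.matrix.Vector;
import gov.sandia.cognition.math.matrix.VectorFactory;

public class OuterProductExample
{
    public static void main(String[] args)
    {
        // Two small dense vectors with arbitrary example values.
        Vector u = VectorFactory.getDefault().copyValues(1.0, 2.0, 3.0);
        Vector v = VectorFactory.getDefault().copyValues(4.0, 5.0);

        // outer is a 3x2 matrix with outer.getElement(i, j) == u(i) * v(j).
        Matrix outer = u.outerProduct(v);
        System.out.println(outer);
    }
}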

Code examples

Code example source: algorithmfoundry/Foundry (the same snippet also ships in the gov.sandia.foundry/gov-sandia-cognition-learning-core Maven artifact)

/**
 * Computes the stochastic transition-probability matrix from the
 * given probabilities.
 * @param alphan
 * Result of the forward pass through the HMM at time n
 * @param betanp1
 * Result of the backward pass through the HMM at time n+1
 * @param bnp1
 * Conditionally independent likelihoods of each observation at time n+1
 * @return
 * Transition probabilities at time n
 */
protected static Matrix computeTransitions(
  Vector alphan,
  Vector betanp1,
  Vector bnp1 )
{
  Vector bnext = bnp1.dotTimes(betanp1);
  return bnext.outerProduct(alphan);
}
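
Given the Javadoc, this is the transition statistic of the HMM forward-backward (Baum-Welch) recursion: entry (i, j) of the returned matrix equals bnp1(i) * betanp1(i) * alphan(j). A hypothetical standalone version of the same computation, with made-up two-state values:

// All values below are invented for illustration.
Vector alphan  = VectorFactory.getDefault().copyValues(0.6, 0.4); // forward pass at time n
Vector betanp1 = VectorFactory.getDefault().copyValues(0.5, 0.5); // backward pass at time n+1
Vector bnp1    = VectorFactory.getDefault().copyValues(0.9, 0.1); // observation likelihoods at n+1

Matrix transitions = bnp1.dotTimes(betanp1).outerProduct(alphan);
// transitions.getElement(i, j) == bnp1(i) * betanp1(i) * alphan(j)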


Code example source: algorithmfoundry/Foundry (also in the gov.sandia.foundry/gov-sandia-cognition-learning-core artifact)

// Fragment: assemble a matrix from rank-one outer products. The first line
// creates x1 * x2^T; plusEquals then adds another such term in place.
this.covarianceInverse = x1.outerProduct(x2);
this.covarianceInverse.plusEquals( x1.outerProduct(x2) );


Code example source: gov.sandia.foundry/gov-sandia-cognition-learning-core (also in algorithmfoundry/Foundry)

// Accumulate the scatter (second-moment) matrix: the sum of x * x^T over all vectors.
for( Vector x : xs )
  sum.accumulate(x.outerProduct(x));
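
The fragment above assumes a RingAccumulator<Matrix> named sum and a collection of vectors xs; a self-contained sketch of the same idiom (both names assumed) would be:

// Sum the rank-one matrices x * x^T over a collection of vectors.
RingAccumulator<Matrix> sum = new RingAccumulator<Matrix>();
for (Vector x : xs)
{
    sum.accumulate(x.outerProduct(x));
}
Matrix secondMoment = sum.getSum(); // the scatter matrix, sum of x * x^T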


Code example source: gov.sandia.foundry/gov-sandia-cognition-learning-core (also in algorithmfoundry/Foundry)

/**
 * Fits a single MultivariateGaussian to the given MixtureOfGaussians
 * @return MultivariateGaussian that captures the mean and covariance of
 * the given MixtureOfGaussians
 */
public MultivariateGaussian.PDF fitSingleGaussian()
{
  Vector mean = this.getMean();
  RingAccumulator<Matrix> covarianceAccumulator =
    new RingAccumulator<Matrix>();
  double denom = this.getPriorWeightSum();
  for( int i = 0; i < this.getDistributionCount(); i++ )
  {
    MultivariateGaussian gaussian =
      (MultivariateGaussian) this.getDistributions().get(i);
    Vector meanDiff = gaussian.getMean().minus( mean );
    covarianceAccumulator.accumulate( gaussian.getCovariance().plus(
      meanDiff.outerProduct( meanDiff ) ).scale(
        this.priorWeights[i]/denom ) );
  }
  return new MultivariateGaussian.PDF( mean, covarianceAccumulator.getSum() );
}
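
The loop is moment matching via the law of total covariance: with normalized weights w_i = priorWeights[i] / denom, component means \mu_i, component covariances \Sigma_i, and mixture mean \mu, the fitted covariance is

\Sigma = \sum_i w_i \left( \Sigma_i + (\mu_i - \mu)(\mu_i - \mu)^{\mathsf{T}} \right)

where the outerProduct call supplies the between-component term (\mu_i - \mu)(\mu_i - \mu)^{\mathsf{T}}.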


Code example source: algorithmfoundry/Foundry (also in the gov.sandia.foundry/gov-sandia-cognition-learning-core artifact)

@Override
public void update(
  Vector value)
{
  // We've added another value.
  this.count++;
  // Compute the difference between the value and the current mean.
  final int dim = value.getDimensionality();
  if (this.mean == null)
  {
    this.mean = VectorFactory.getDefault().createVector(dim);
  }
  Vector delta = value.minus(this.mean);
  // Update the mean based on the difference between the value
  // and the mean along with the new count.
  this.mean.plusEquals(delta.scale(1.0 / this.count));
  // Update the squared differences from the mean, using the new
  // mean in the process.
  if (this.sumSquaredDifferences == null)
  {
    this.sumSquaredDifferences
      = MatrixFactory.getDefault().createIdentity(dim, dim);
    this.sumSquaredDifferences.scaleEquals(
      this.getDefaultCovariance());
  }
  Vector delta2 = value.minus(this.mean);
  this.sumSquaredDifferences.plusEquals(delta.outerProduct(delta2));
}
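
This is a Welford-style online covariance update: delta is the deviation from the old mean and delta2 the deviation from the updated mean, so the accumulated delta.outerProduct(delta2) terms sum to the covariance scatter matrix. Below is a sketch of how such state is typically read out; the accessor name getCovariance is invented here, and the unbiased (count - 1) divisor is one common convention:

// Hypothetical accessor; assumes count > 1.
public Matrix getCovariance()
{
    // Divide the accumulated scatter by (count - 1) for an unbiased estimate.
    return this.sumSquaredDifferences.scale(1.0 / (this.count - 1));
}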


Code example source: algorithmfoundry/Foundry

// Rank-one correction of a stored inverse matrix, built from an outer product.
double denom = 1.0 + delta2.dotProduct(Aiu);
vtAi.scaleEquals(1.0 / denom);
Matrix update = Aiu.outerProduct(vtAi);
this.sumSquaredDifferencesInverse.minusEquals(update);
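
Judging by the variable names (Aiu reads as A^{-1}u and vtAi as v^{\mathsf{T}}A^{-1}), this fragment applies the Sherman-Morrison identity, which updates a known inverse after a rank-one change without refactorizing the matrix:

(A + u v^{\mathsf{T}})^{-1} = A^{-1} - \frac{A^{-1} u \, v^{\mathsf{T}} A^{-1}}{1 + v^{\mathsf{T}} A^{-1} u}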

Code example source: algorithmfoundry/Foundry

// Accumulate a cross-moment term: the rank-one matrix x1 * x2^T.
Cin.accumulate( x1.outerProduct(x2) );


Code example source: algorithmfoundry/Foundry (also in the gov.sandia.foundry/gov-sandia-cognition-learning-core artifact)

// Add a scaled rank-one term: ((n*nu)/nuhat) * delta * delta^T.
betahat.plusEquals( delta.outerProduct(delta.scale((n*nu)/nuhat)) );

