This article collects code examples for the Java method gov.sandia.cognition.math.matrix.Vector.plusEquals(), showing how Vector.plusEquals() is used in practice. The examples are drawn from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of Vector.plusEquals():

Package path: gov.sandia.cognition.math.matrix.Vector
Class name: Vector
Method name: plusEquals
Description: none available
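Although the Javadoc gives no description, plusEquals(Vector other) adds other to this vector element-wise and mutates the receiver in place, allocating no new vector. A minimal plain-Java sketch of that in-place contract (a double[] stand-in is used here instead of the real Vector class; the helper name is hypothetical):

```java
import java.util.Arrays;

// Minimal sketch of the in-place semantics of Vector.plusEquals():
// the receiver is mutated element-wise; no new array is allocated.
public class PlusEqualsSketch {

    // Hypothetical stand-in for Vector.plusEquals(Vector other).
    static void plusEquals(double[] self, double[] other) {
        if (self.length != other.length) {
            throw new IllegalArgumentException("dimension mismatch");
        }
        for (int i = 0; i < self.length; i++) {
            self[i] += other[i];
        }
    }

    public static void main(String[] args) {
        double[] a = {1.0, 2.0, 3.0};
        plusEquals(a, new double[]{10.0, 20.0, 30.0});
        System.out.println(Arrays.toString(a)); // [11.0, 22.0, 33.0]
    }
}
```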
Code example source: algorithmfoundry/Foundry

@Override
public Vector evaluate(
    Vector input)
{
    Vector discriminant = super.evaluate( input );
    discriminant.plusEquals(this.bias);
    return discriminant;
}
Code example source: algorithmfoundry/Foundry

@Override
final public Vector plus(
    final Vector v)
{
    // I need to flip this so that if the input is a dense vector, I
    // return a dense vector. If it's a sparse vector, then a sparse vector
    // is still returned.
    Vector result = v.clone();
    result.plusEquals(this);
    return result;
}
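The comment above explains the trick: cloning the argument v (rather than this) makes the result take v's concrete type, so adding a dense vector to a sparse one yields a dense result; because addition commutes, v + this equals this + v. A plain-array sketch of the same clone-then-plusEquals pattern (names are hypothetical stand-ins, not the Foundry API):

```java
import java.util.Arrays;

// Sketch of the clone-then-plusEquals pattern from plus() above:
// clone the argument so the result takes the argument's representation,
// then add this vector in place. The argument itself is never mutated.
public class ClonePlusSketch {

    static double[] plus(double[] self, double[] v) {
        double[] result = v.clone();      // result has v's representation
        for (int i = 0; i < result.length; i++) {
            result[i] += self[i];         // in-place "plusEquals(this)"
        }
        return result;
    }

    public static void main(String[] args) {
        double[] self = {1.0, 2.0};
        double[] v = {5.0, 7.0};
        System.out.println(Arrays.toString(plus(self, v))); // [6.0, 9.0]
        System.out.println(Arrays.toString(v));             // [5.0, 7.0] -- untouched
    }
}
```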
Code example source: gov.sandia.foundry/gov-sandia-cognition-learning-core

/**
 * Computes the raw (unsquashed) activation at the hidden layer for the
 * given input.
 * @param input
 * Input to compute the raw hidden activation of.
 * @return
 * Raw (unsquashed) activation at the hidden layer.
 */
protected Vector evaluateHiddenLayerActivation(
    Vector input )
{
    Vector hiddenActivation = this.inputToHiddenWeights.times( input );
    hiddenActivation.plusEquals( this.inputToHiddenBiasWeights );
    return hiddenActivation;
}
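The hidden activation is an affine map: multiply the input by the input-to-hidden weight matrix, then add the bias in place with plusEquals, i.e. h = W·x + b. A small self-contained sketch with plain arrays (the method and parameter names here are hypothetical):

```java
import java.util.Arrays;

// Sketch of the raw hidden-layer activation: h = W * x + b,
// where the bias add mirrors hiddenActivation.plusEquals(bias).
public class HiddenActivationSketch {

    static double[] activation(double[][] weights, double[] input, double[] bias) {
        double[] h = new double[weights.length];
        for (int i = 0; i < weights.length; i++) {
            for (int j = 0; j < input.length; j++) {
                h[i] += weights[i][j] * input[j];   // weights.times(input)
            }
            h[i] += bias[i];                        // plusEquals(bias)
        }
        return h;
    }

    public static void main(String[] args) {
        double[][] w = {{1.0, 0.0}, {0.0, 2.0}};
        double[] x = {3.0, 4.0};
        double[] b = {0.5, -1.0};
        System.out.println(Arrays.toString(activation(w, x, b))); // [3.5, 7.0]
    }
}
```

The output-layer method below follows the identical pattern with the hidden-to-output weights and biases.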
Code example source: gov.sandia.foundry/gov-sandia-cognition-learning-core

/**
 * Evaluates the output from the squashed hidden-layer activation.
 * @param squashedHiddenActivation
 * Squashed hidden-layer activation.
 * @return
 * Output of the neural net.
 */
protected Vector evaluateOutputFromSquashedHiddenLayerActivation(
    Vector squashedHiddenActivation )
{
    Vector outputActivation = this.hiddenToOutputWeights.times(
        squashedHiddenActivation );
    outputActivation.plusEquals( this.hiddenToOutputBiasWeights );
    return outputActivation;
}
Code example source: algorithmfoundry/Foundry

@Override
final public Vector minus(
    final Vector v)
{
    // I need to flip this so that if the input is a dense vector, I
    // return a dense vector. If it's a sparse vector, then a sparse vector
    // is still returned.
    Vector result = v.clone();
    result.negativeEquals();
    result.plusEquals(this);
    return result;
}
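minus() reuses the same type-preserving trick, but since subtraction does not commute, it first negates the clone: this − v = (−v) + this. A minimal plain-array sketch under that assumption (names are hypothetical):

```java
import java.util.Arrays;

// Sketch of minus() above: clone v, negate it in place (negativeEquals),
// then add this in place (plusEquals), yielding this - v with v's type.
public class MinusSketch {

    static double[] minus(double[] self, double[] v) {
        double[] result = v.clone();
        for (int i = 0; i < result.length; i++) {
            result[i] = -result[i];       // negativeEquals()
        }
        for (int i = 0; i < result.length; i++) {
            result[i] += self[i];         // plusEquals(this)
        }
        return result;
    }

    public static void main(String[] args) {
        double[] self = {10.0, 5.0};
        double[] v = {3.0, 8.0};
        System.out.println(Arrays.toString(minus(self, v))); // [7.0, -3.0]
    }
}
```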
Code example source: gov.sandia.foundry/gov-sandia-cognition-common-core

public Vector evaluate(
    Vector input)
{
    Vector xnm1 = this.getState();
    Vector xn = A.times(xnm1);
    xn.plusEquals( B.times(input) );
    this.setState(xn);
    return C.times(xn);
}
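This snippet is one step of a linear dynamical system: the new state is x_n = A·x_{n−1} + B·u, with plusEquals accumulating the input term, and the output is C·x_n. A tiny sketch of the same update using scalar "matrices" to keep the arithmetic obvious (the class and field names here are hypothetical):

```java
// Sketch of the state update above: xn = A*x + B*u, output y = C*xn.
// Uses 1x1 "matrices" (plain doubles) in place of Foundry Matrix objects.
public class StateSpaceSketch {
    final double a, b, c;   // system matrices A, B, C (scalar here)
    double state;           // previous state x_{n-1}

    StateSpaceSketch(double a, double b, double c, double x0) {
        this.a = a; this.b = b; this.c = c; this.state = x0;
    }

    double evaluate(double input) {
        double xn = a * state;   // A.times(x_{n-1})
        xn += b * input;         // xn.plusEquals(B.times(input))
        state = xn;              // setState(xn)
        return c * xn;           // C.times(xn)
    }

    public static void main(String[] args) {
        StateSpaceSketch sys = new StateSpaceSketch(0.5, 1.0, 2.0, 4.0);
        System.out.println(sys.evaluate(1.0)); // x1 = 0.5*4 + 1 = 3, y = 6.0
        System.out.println(sys.evaluate(0.0)); // x2 = 0.5*3 = 1.5, y = 3.0
    }
}
```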
Code example source: algorithmfoundry/Foundry

@Override
final protected double iterate()
{
    Vector q = A.evaluate(residual);
    double alpha = delta / (residual.dotProduct(q));
    x.plusEquals(residual.scale(alpha));
    if (((iterationCounter + 1) % 50) == 0)
    {
        residual = rhs.minus(A.evaluate(x));
    }
    else
    {
        residual = residual.minus(q.scale(alpha));
    }
    delta = residual.dotProduct(residual);
    return delta;
}
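Each iterate of this descent-style linear solver steps x along the residual direction by α = (rᵀr)/(rᵀAr), using plusEquals to update x in place; the residual is updated incrementally and recomputed exactly every 50 iterations to curb floating-point drift. A compact self-contained sketch of the same loop on a small symmetric positive-definite system (all names hypothetical, plain arrays in place of Foundry types):

```java
// Sketch of the iterate() above, looped to solve the SPD system A x = b:
// step size alpha = (r.r)/(r.q) with q = A r, x updated in place
// (mirroring x.plusEquals(...)), residual recomputed exactly every 50 steps.
public class ResidualDescentSketch {

    static double[] multiply(double[][] A, double[] v) {
        double[] out = new double[v.length];
        for (int i = 0; i < v.length; i++)
            for (int j = 0; j < v.length; j++)
                out[i] += A[i][j] * v[j];
        return out;
    }

    static double dot(double[] a, double[] b) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    static double[] solve(double[][] A, double[] b, int maxIter) {
        double[] x = new double[b.length];      // start at 0, so r = b
        double[] r = b.clone();
        double delta = dot(r, r);
        for (int iter = 0; iter < maxIter && delta > 1e-12; iter++) {
            double[] q = multiply(A, r);        // A.evaluate(residual)
            double alpha = delta / dot(r, q);
            for (int i = 0; i < x.length; i++)
                x[i] += alpha * r[i];           // x.plusEquals(r.scale(alpha))
            if ((iter + 1) % 50 == 0) {         // periodic exact residual
                double[] ax = multiply(A, x);
                for (int i = 0; i < r.length; i++) r[i] = b[i] - ax[i];
            } else {                            // incremental residual update
                for (int i = 0; i < r.length; i++) r[i] -= alpha * q[i];
            }
            delta = dot(r, r);
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] A = {{4.0, 1.0}, {1.0, 3.0}};
        double[] b = {1.0, 2.0};
        double[] x = solve(A, b, 1000);
        System.out.println(x[0] + " " + x[1]); // close to 1/11, 7/11
    }
}
```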
This content was collected from the web. If it infringes your rights, please contact the author to have it removed.