This article collects a number of code examples for the Java class weka.classifiers.bayes.NaiveBayes and shows how the NaiveBayes class is used in practice. The examples come mainly from GitHub, Stack Overflow, Maven and similar sources, extracted from selected open-source projects, and should serve as useful references. Details of the NaiveBayes class are as follows:
Package: weka.classifiers.bayes
Class name: NaiveBayes
Class for a Naive Bayes classifier using estimator classes. Numeric estimator precision values are chosen based on analysis of the training data. For this reason, the classifier is not an UpdateableClassifier (which in typical usage is initialized with zero training instances) -- if you need the UpdateableClassifier functionality, use the NaiveBayesUpdateable classifier. The NaiveBayesUpdateable classifier will use a default precision of 0.1 for numeric attributes when buildClassifier is called with zero training instances.
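As a rough sketch of the incremental alternative mentioned above (assuming an Instances object named data whose class attribute is already set; NaiveBayesUpdateable lives in the same weka.classifiers.bayes package), training can proceed one instance at a time:

import weka.classifiers.bayes.NaiveBayesUpdateable;
import weka.core.Instances;

// Assumption: "data" is a loaded Instances object with its class index set.
NaiveBayesUpdateable nb = new NaiveBayesUpdateable();
// Start from an empty copy of the header; numeric precision defaults to 0.1 here.
nb.buildClassifier(new Instances(data, 0));
for (int i = 0; i < data.numInstances(); i++) {
  nb.updateClassifier(data.instance(i));  // incremental update, one instance at a time
}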
For more information on Naive Bayes classifiers, see
George H. John, Pat Langley: Estimating Continuous Distributions in Bayesian Classifiers. In: Eleventh Conference on Uncertainty in Artificial Intelligence, San Mateo, 338-345, 1995.
BibTeX:
@inproceedings{John1995,
address = {San Mateo},
author = {George H. John and Pat Langley},
booktitle = {Eleventh Conference on Uncertainty in Artificial Intelligence},
pages = {338-345},
publisher = {Morgan Kaufmann},
title = {Estimating Continuous Distributions in Bayesian Classifiers},
year = {1995}
}
Valid options are:
-K
Use kernel density estimator rather than normal
distribution for numeric attributes
-D
Use supervised discretization to process numeric attributes
-O
Display model in old format (good when there are many classes)
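The -K, -D and -O options above correspond to bean-style setters on NaiveBayes. A minimal runnable sketch follows (the class name and ARFF path are placeholders, the path reused from the example further below):

import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;
import weka.core.converters.ConverterUtils;

public class NaiveBayesOptionsDemo {
  public static void main(String[] args) throws Exception {
    Instances data = new ConverterUtils.DataSource("./data/train.arff").getDataSet();
    if (data.classIndex() == -1)
      data.setClassIndex(data.numAttributes() - 1);

    NaiveBayes nb = new NaiveBayes();
    nb.setUseKernelEstimator(true);       // -K: kernel density estimator for numeric attributes
    // nb.setUseSupervisedDiscretization(true); // -D: alternative handling of numeric attributes
    nb.setDisplayModelInOldFormat(true);  // -O: old-style model output
    nb.buildClassifier(data);
    System.out.println(nb);               // prints the estimated distributions
  }
}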
Code example source: stackoverflow.com

// Train a NaiveBayes model on the training set
Classifier cModel = (Classifier) new NaiveBayes();
cModel.buildClassifier(isTrainingSet);

// Serialize the trained model to disk, then load it back
weka.core.SerializationHelper.write("/some/where/nBayes.model", cModel);
Classifier cls = (Classifier) weka.core.SerializationHelper.read("/some/where/nBayes.model");

// Test the model
Evaluation eTest = new Evaluation(isTrainingSet);
eTest.evaluateModel(cls, isTrainingSet);
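Note that the snippet above evaluates the model on the same data it was trained on, which gives an optimistic estimate. A sketch of 10-fold cross-validation on the same isTrainingSet, using Evaluation.crossValidateModel, would look like:

// Cross-validate a fresh NaiveBayes model instead of testing on the training data.
Evaluation eval = new Evaluation(isTrainingSet);
eval.crossValidateModel(new NaiveBayes(), isTrainingSet, 10, new java.util.Random(1));
System.out.println(eval.toSummaryString());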
Code example source: nz.ac.waikato.cms.weka/weka-stable

/**
 * Returns the Capabilities of this filter.
 *
 * @return the capabilities of this object
 * @see Capabilities
 */
@Override
public Capabilities getCapabilities() {
  return new NaiveBayes().getCapabilities();
}
Code example source: nz.ac.waikato.cms.weka/weka-stable

/**
 * Main method for testing this class.
 *
 * @param argv the options
 */
public static void main(String[] argv) {
  runClassifier(new NaiveBayes(), argv);
}
Code example source: stackoverflow.com
NaiveBayes nB = new NaiveBayes();
nB.buildClassifier(train);
Code example source: nz.ac.waikato.cms.weka/weka-stable

@Override
protected Instances process(Instances instances) throws Exception {
  if (m_estimator == null) {
    m_estimator = new NaiveBayes();
    trainingData = Filter.useFilter(instances, m_remove);
    m_estimator.buildClassifier(trainingData);
    Estimator[][] estimators = m_estimator.getConditionalEstimators();
    Instances header = m_estimator.getHeader();
    int index = 0;
    for (int i = 0; i < header.numAttributes(); i++) {
Code example source: stackoverflow.com

public class Run {
  public static void main(String[] args) throws Exception {
    ConverterUtils.DataSource source1 = new ConverterUtils.DataSource("./data/train.arff");
    Instances train = source1.getDataSet();
    // setting class attribute if the data format does not provide this information
    // For example, the XRFF format saves the class attribute information as well
    if (train.classIndex() == -1)
      train.setClassIndex(train.numAttributes() - 1);

    ConverterUtils.DataSource source2 = new ConverterUtils.DataSource("./data/test.arff");
    Instances test = source2.getDataSet();
    // setting class attribute if the data format does not provide this information
    // For example, the XRFF format saves the class attribute information as well
    if (test.classIndex() == -1)
      test.setClassIndex(test.numAttributes() - 1);

    // model
    NaiveBayes naiveBayes = new NaiveBayes();
    naiveBayes.buildClassifier(train);

    // this does the trick
    double label = naiveBayes.classifyInstance(test.instance(0));
    test.instance(0).setClassValue(label);
    System.out.println(test.instance(0).stringValue(4));
  }
}
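The example above labels only the first test instance. To label every instance, or to obtain per-class probabilities, the same model can be queried in a loop; a sketch reusing the naiveBayes and test variables from the snippet above:

// Label each test instance and print its predicted class and class distribution.
for (int i = 0; i < test.numInstances(); i++) {
  double label = naiveBayes.classifyInstance(test.instance(i));
  double[] dist = naiveBayes.distributionForInstance(test.instance(i));
  test.instance(i).setClassValue(label);
  System.out.println(test.classAttribute().value((int) label)
      + "  " + java.util.Arrays.toString(dist));
}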
Code example source: nz.ac.waikato.cms.weka/weka-stable
getCapabilities().testWithFail(instances);
while (enumInsts.hasMoreElements()) {
Instance instance = enumInsts.nextElement();
updateClassifier(instance);
Code example source: nz.ac.waikato.cms.weka/DTNB
m_NB.updateClassifier(instance);
double[] nbDist = m_NB.distributionForInstance(instance);
instance.setWeight(-instance.weight());
m_NB.updateClassifier(instance);
Code example source: nz.ac.waikato.cms.weka/DTNB
m_NB = new NaiveBayes();
m_NB.buildClassifier(m_theInstances);
Code example source: Waikato/weka-trunk

@Override
protected Instances process(Instances instances) throws Exception {
  if (m_estimator == null) {
    m_estimator = new NaiveBayes();
    trainingData = Filter.useFilter(instances, m_remove);
    m_estimator.buildClassifier(trainingData);
    Estimator[][] estimators = m_estimator.getConditionalEstimators();
    Instances header = m_estimator.getHeader();
    int index = 0;
    for (int i = 0; i < header.numAttributes(); i++) {
Code example source: Waikato/weka-trunk

/**
 * Main method for testing this class.
 *
 * @param argv the options
 */
public static void main(String[] argv) {
  runClassifier(new NaiveBayes(), argv);
}
Code example source: Waikato/weka-trunk
getCapabilities().testWithFail(instances);
while (enumInsts.hasMoreElements()) {
Instance instance = enumInsts.nextElement();
updateClassifier(instance);
Code example source: nz.ac.waikato.cms.weka/DTNB
class_distribs[i][(int) inst.classValue()] -= inst.weight();
inst.setWeight(-inst.weight());
m_NB.updateClassifier(inst);
inst.setWeight(-inst.weight());
} else {
double[] nbDist = m_NB.distributionForInstance(inst);
m_NB.updateClassifier(inst);
} else {
class_distribs[i][0] += (inst.classValue() * inst.weight());
Code example source: nz.ac.waikato.cms.weka/distributedWekaBase

public AggregateableFilteredClassifier() {
  m_Classifier = new NaiveBayes();
}
Code example source: nz.ac.waikato.cms.weka/distributedWekaBase

@Test
public void testScoreWithClassifier() throws Exception {
  Instances train = new Instances(new BufferedReader(new StringReader(
    CorrelationMatrixMapTaskTest.IRIS)));
  train.setClassIndex(train.numAttributes() - 1);

  NaiveBayes bayes = new NaiveBayes();
  bayes.buildClassifier(train);

  WekaScoringMapTask task = new WekaScoringMapTask();
  task.setModel(bayes, train, train);

  assertEquals(0, task.getMissingMismatchAttributeInfo().length());
  assertEquals(3, task.getPredictionLabels().size());

  for (int i = 0; i < train.numInstances(); i++) {
    assertEquals(3, task.processInstance(train.instance(i)).length);
  }
}
Code example source: Waikato/weka-trunk

/**
 * Returns the Capabilities of this filter.
 *
 * @return the capabilities of this object
 * @see Capabilities
 */
@Override
public Capabilities getCapabilities() {
  return new NaiveBayes().getCapabilities();
}
Code example source: nz.ac.waikato.cms.weka/weka-stable

/** Creates a default NaiveBayes */
public Classifier getClassifier() {
  return new NaiveBayes();
}
Code example source: nz.ac.waikato.cms.weka/DTNB
m_NB = new NaiveBayes();
m_NB.buildClassifier(m_theInstances);
Code example source: stackoverflow.com

// Choose a classifier implementation by name; Classifier is the common parent type
Classifier model;
if (alg.equals("DecisionStump")) {
  model = new DecisionStump();
} else if (alg.equals("NaiveBayes")) {
  model = new NaiveBayes();
} else {
  throw new IllegalArgumentException("Unknown algorithm: " + alg);
}
Code example source: nz.ac.waikato.cms.weka/distributedWekaBase

@Test
public void testScoreWithClassifierSomeMissingFields() throws Exception {
  Instances train = new Instances(new BufferedReader(new StringReader(
    CorrelationMatrixMapTaskTest.IRIS)));
  train.setClassIndex(train.numAttributes() - 1);

  NaiveBayes bayes = new NaiveBayes();
  bayes.buildClassifier(train);

  WekaScoringMapTask task = new WekaScoringMapTask();
  Remove r = new Remove();
  r.setAttributeIndices("1");
  r.setInputFormat(train);
  Instances test = Filter.useFilter(train, r);

  task.setModel(bayes, train, test);

  assertTrue(task.getMissingMismatchAttributeInfo().length() > 0);
  assertTrue(task.getMissingMismatchAttributeInfo().equals(
    "sepallength missing from incoming data\n"));
  assertEquals(3, task.getPredictionLabels().size());

  for (int i = 0; i < test.numInstances(); i++) {
    assertEquals(3, task.processInstance(test.instance(i)).length);
  }
}