[('We first segment the data by class, and then compute the mean and variance of x in each class.',
3.283018867924528),
('For example, the naive Bayes classifier will make the correct MAP decision rule classification so long as the correct class is more probable than any other class.',
2.773584905660377),
('Like the multinomial model, this model is popular for document classification tasks, where binary term occurrence features are used rather than term frequencies.',
2.7169811320754724),
('For example, suppose the training data contains a continuous attribute, x.',
2.7169811320754715),
('The discussion so far has derived the independent feature model, that is, the naive Bayes probability model.',
2.622641509433962),
('If a given class and feature value never occur together in the training data, then the frequency-based probability estimate will be zero.',
2.4150943396226414),
('Note that a naive Bayes classifier with a Bernoulli event model is not the same as a multinomial NB classifier with frequency counts truncated to one.',
2.377358490566038),
('The assumptions on the distributions of features are called the event model of the naive Bayes classifier.',
2.2264150943396226),
('This is the event model typically used for document classification, with events representing the occurrence of a word in a single document (see bag of words assumption).',
2.2264150943396226),
('In this manner, the overall classifier can be robust enough to ignore serious deficiencies in its underlying naive probability model.',
2.2075471698113205)]
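
The highest-scored sentence describes the Gaussian event model: segment the data by class, compute the per-class mean and variance of each continuous feature, and then classify with the MAP decision rule. A minimal sketch of that procedure follows; the function names (`fit_gaussian_nb`, `predict`) and the zero-variance guard are illustrative choices, not taken from the source.

```python
import math
from collections import defaultdict

def fit_gaussian_nb(X, y):
    """Segment rows by class; return per-class priors and per-feature (mean, variance)."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    model = {}
    n = len(y)
    for c, rows in by_class.items():
        prior = len(rows) / n
        stats = []
        for feature in zip(*rows):  # iterate over feature columns within this class
            mu = sum(feature) / len(feature)
            var = sum((v - mu) ** 2 for v in feature) / len(feature)
            stats.append((mu, max(var, 1e-9)))  # guard against zero variance
        model[c] = (prior, stats)
    return model

def predict(model, x):
    """MAP decision rule: argmax over classes of log prior + sum of Gaussian log-likelihoods."""
    best, best_lp = None, float("-inf")
    for c, (prior, stats) in model.items():
        lp = math.log(prior)
        for v, (mu, var) in zip(x, stats):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

As the scored sentences note, the classifier only needs the correct class to be *more probable* than the others, so even a crude per-class Gaussian fit often yields the right MAP decision.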
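
Two other scored sentences concern the discrete event models: the zero-frequency problem (a class/feature pair never seen in training yields a zero estimate) and the fact that a Bernoulli-model classifier is not the same as a multinomial one with counts truncated to one. A hedged sketch of both points, with illustrative names and add-one (Laplace) smoothing as one standard fix for zero frequencies:

```python
import math

def laplace_estimate(count, total, vocab_size):
    # Add-one smoothing: never zero, even for (class, word) pairs unseen in training.
    return (count + 1) / (total + vocab_size)

def multinomial_log_likelihood(doc, word_probs):
    # Multinomial model: only words occurring in the document contribute.
    return sum(math.log(word_probs[w]) for w in doc)

def bernoulli_log_likelihood(doc, word_probs, vocab):
    # Bernoulli model: absent vocabulary words also contribute, via log(1 - p),
    # which is why it differs from a truncated-count multinomial model.
    lp = 0.0
    for w in vocab:
        p = word_probs[w]
        lp += math.log(p) if w in doc else math.log(1.0 - p)
    return lp
```

The explicit `log(1 - p)` penalty for non-occurring terms is the structural difference the source sentence points to: truncating multinomial counts to one changes the counts, but never introduces that absence term.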