## Kullback-Leibler divergence

Let $\mathbf{P}$ and $\mathbf{Q}$ be discrete probability distributions with pmfs $p$ and $q$ respectively. Let's also assume $\mathbf{P}$ and $\mathbf{Q}$ have a common sample space $E$. Then the KL divergence between $\mathbf{P}$ and $\mathbf{Q}$ is defined by

$$\text{KL}(\mathbf{P}, \mathbf{Q}) = \sum_{x \in E} p(x) \ln \left( \frac{p(x)}{q(x)} \right).$$

Two basic properties:

- $\text{KL}(\mathbf{P}, \mathbf{Q}) \geq 0$ (nonnegative)
- $\text{KL}(\mathbf{P}, \mathbf{Q}) = 0$ only if $\mathbf{P}$ and $\mathbf{Q}$ are the same distribution (definite)
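As a quick numerical check of the definition, here is a minimal sketch in Python (the helper name `kl_divergence` and the example distributions are illustrative, not from the original):

```python
import math

def kl_divergence(p, q):
    """KL(P, Q) = sum over x in E of p(x) * ln(p(x) / q(x)).

    p and q are dicts mapping each outcome x in the common sample
    space E to its probability. Terms with p(x) == 0 contribute 0
    (convention 0 * ln 0 = 0); assumes q(x) > 0 wherever p(x) > 0,
    otherwise the divergence is infinite.
    """
    total = 0.0
    for x, px in p.items():
        if px > 0:
            total += px * math.log(px / q[x])
    return total

# Two distributions on the common sample space E = {"a", "b", "c"}
p = {"a": 0.5, "b": 0.3, "c": 0.2}
q = {"a": 0.4, "b": 0.4, "c": 0.2}

print(kl_divergence(p, q))  # > 0, since P and Q differ (nonnegativity)
print(kl_divergence(p, p))  # 0.0, a distribution against itself (definiteness)
```

Note that swapping the arguments generally gives a different value: KL divergence is not symmetric, which is why it is a divergence rather than a distance.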
## False positive, False negative, Type I error, Type II error

A compact summary (rows: the decision made; columns: the actual truth):

| Decision \ Truth | True | False |
|---|---|---|
| True | Correct | Type II error |
| False | Type I error | Correct |

Binary classification:

| | Actual Class Positive | Actual Class Negative |
|---|---|---|
| Assigned Positive | True Positive | False Positive |
| Assigned Negative | False Negative | True Negative |

Statistics: a positive result corresponds to rejecting the null hypothesis, while a negative result corresponds to failing to reject the null hypothesis; "false" means the conclusion drawn is incorrect. Thus a type I error is a false positive, and a type II error is a false negative. (Wikipedia)

| | Null hypothesis ($H_0$) true | Null hypothesis ($H_0$) false |
|---|---|---|
| Fail to reject | Correct (True Negative) ($1-\alpha$, confidence level) | Type II error (False Negative) ($\beta$) |
| Reject | Type I error (False Positive) ($\alpha$, significance level) | Correct (True Positive) ($1-\beta$, power) |
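To make the tables concrete, here is a minimal sketch in Python that tallies the four confusion-matrix cells from paired actual/assigned labels (the helper name `confusion_counts` and the example label vectors are made up for this illustration):

```python
def confusion_counts(actual, assigned):
    """Count the four confusion-matrix cells for binary labels.

    actual, assigned: sequences of booleans, True meaning the positive
    class (in testing terms, "positive" corresponds to rejecting H0).
    """
    pairs = list(zip(actual, assigned))
    tp = sum(a and b for a, b in pairs)          # True Positive
    fp = sum(not a and b for a, b in pairs)      # False Positive -> Type I error
    fn = sum(a and not b for a, b in pairs)      # False Negative -> Type II error
    tn = sum(not a and not b for a, b in pairs)  # True Negative
    return tp, fp, fn, tn

actual   = [True, True, False, False, True]
assigned = [True, False, True, False, True]
print(confusion_counts(actual, assigned))  # (2, 1, 1, 1)
```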